HadCRUT4.5 anomaly for September 2017 = 0.54C

The HadCRUT4.5 temperature anomaly for September calculated by spherical triangulation is 0.54C, a fall of 0.17C since August. Temperatures have seemingly returned to the long-term trend after the 2016 El Nino.

Monthly temperature anomalies for HadCRUT4.5 (HadSST3 and CRUTEM4.6 stations data) calculated by spherical triangulation method.

The temperature distribution in the Northern Hemisphere for September is shown below. All temperatures are relative to a 30 year norm from 1961-1990.

Temperature distribution for September 2017 over Northern Hemisphere

and for the Southern Hemisphere

Temperature Distribution over the Southern Hemisphere.

which shows a La Nina type pattern.

About Clive Best

PhD High Energy Physics. Worked at CERN, Rutherford Lab, JET, JRC, OSVision.
This entry was posted in AGW, Climate Change.

27 Responses to HadCRUT4.5 anomaly for September 2017 = 0.54C

  1. Charles May says:


    I am delighted to see what you have presented on the H4 data. Some notable things have emerged in my cyclic analysis of the H4 data and also of the Nino regions, including the Multivariate ENSO Index (MEI).


    I have analyzed the October data as well. In comparison with previous information I have added more information to my charts.

    This is my latest version.


    I have added some new information. The chart now shows what the ECS value would be if CO2 alone were the cause of the temperature measurements. It reveals the slope of the linear fit, and the ECS for CO2 is given for the cyclical fit. The correlation coefficient is very good. I also show two cycles: the 67-year cycle by itself, and that cycle combined with the DeVries cycle.

    The next chart simply shows that with as few as nine cycles I can capture the essence of the measurements.


    The next chart shows the short-term projection outlook. It has features that I believe link to ongoing events in the Nino regions that I will show.


    Note the presence of humps in the year 2017 and note where the projection bottoms out in 2018. This seems to align with what I see in the Nino regions.

    Nino Regions

    I owe something to Javier on the analysis of these regions. I listened to him and spent a lot of energy improving the cyclic analysis, which I think I have done. Of all the projections, I think the ones for the Nino regions are the best. Clearly, on simple examination, the Nino region data is seen to be little more than cyclic behavior.

    I attained these results by combining monthly, weekly and daily data. The measurements are monthly data until 1990. It is weekly data from 1990 until 2014 and it is daily data thereafter.
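    A minimal sketch of that kind of splicing, assuming pandas and using purely synthetic constant series to mark which source each span comes from (the function and variable names here are illustrative, not the actual analysis code):

```python
import pandas as pd

def splice_series(monthly, weekly, daily):
    """Splice three datetime-indexed Series into one record, using the
    monthly data before 1990, weekly data 1990-2014, daily data after."""
    parts = [
        monthly[monthly.index < "1990-01-01"],
        weekly[(weekly.index >= "1990-01-01") & (weekly.index < "2014-01-01")],
        daily[daily.index >= "2014-01-01"],
    ]
    return pd.concat(parts).sort_index()

# Tiny synthetic example: constant levels mark which source each span uses
monthly = pd.Series(1.0, index=pd.date_range("1985-01-01", "1995-01-01", freq="MS"))
weekly = pd.Series(2.0, index=pd.date_range("1988-01-01", "2015-01-01", freq="W"))
daily = pd.Series(3.0, index=pd.date_range("2013-01-01", "2016-01-01", freq="D"))

spliced = splice_series(monthly, weekly, daily)
```

    The overlapping years are deliberately trimmed so each date is taken from exactly one source resolution.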

    Nino Region 3.4




    Examining these charts, I don't think it can be argued that they fail to adequately represent this region. Now, for the projection.


    The hump in 2017 is clearly visible, and it seems to occur earlier in 2017 than what we see in the H4 analysis. It also bottoms out in 2018 earlier than H4. It seems to precede H4, as it should.

    Without bothering with all the data fits for the other regions I will simply furnish the fit from 2014 and the projections for the other regions. My worst fit is with region 1.2. It has the noisiest data.

    Region 1.2



    Region 3.0




    In lieu of Nino region 4.0 I furnish the MEI.

    Overall it looks like this. These are only monthly data and I think they are more subject to change.



    These evaluations show similar characteristics.

    As a bonus I have also included my very recent evaluation of global RSS data. Normally, I don’t include projections from RSS or UAH data since the time record is not very long and is subject to radical change with the addition of another month’s worth of data. I include it here. It may be only coincidental.


    I included the pause line in the evaluation. Who knows, it may return in the months ahead.


    Perhaps, this will not all hold together. Of the things I evaluate I have the most confidence in the evaluations of the Nino regions. Clearly, they are cyclical; it is unmistakable.

    I welcome any and all comments. I wrote hundreds of technical letters through the years. Peer review was part of that process.

    I do see things sort of coming together.

    I do apologize that I am still unable to get my charts through except through a link.

    • This NINO3.4 model fit of yours appears highly correlated, but how quickly does it diverge? (such as this one: https://1drv.ms/i/s!AkPliAI0REKhgZYc9ysJy9DwSpsF5w)

      How many sinusoidal factors does it contain? It is often easy to fit a profile given enough terms, as is known from Fourier analysis. Yet, since this will often involve closely spaced spectral components whose phases are set to destructively cancel over certain intervals, they will constructively amplify elsewhere, and any fit will wildly diverge outside of the training interval.
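      That divergence is easy to demonstrate numerically; a toy numpy sketch, fitting a bank of closely spaced sinusoids by least squares on a training window and then evaluating outside it:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 100.0, 0.25)
train = t < 60                        # fit only on the first 60 "years"

# Target: a single slow oscillation plus noise
y = np.sin(2 * np.pi * t / 30) + 0.3 * rng.standard_normal(t.size)

# A bank of closely spaced frequencies: "enough terms" to fit anything
freqs = np.linspace(0.01, 0.5, 40)

def design_matrix(tt):
    cols = [np.sin(2 * np.pi * f * tt) for f in freqs]
    cols += [np.cos(2 * np.pi * f * tt) for f in freqs]
    return np.column_stack(cols)

# Least-squares fit of 80 amplitudes on the training interval only
coef, *_ = np.linalg.lstsq(design_matrix(t[train]), y[train], rcond=None)
fit = design_matrix(t) @ coef

rms_train = float(np.sqrt(np.mean((fit[train] - y[train]) ** 2)))
rms_test = float(np.sqrt(np.mean((fit[~train] - y[~train]) ** 2)))
```

      In-sample the residual is small, but outside the training window the closely spaced components no longer cancel, so the out-of-sample error is typically far larger.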

      But if the model fit is based on physics and the solving of (say) a differential equation that governs the behavioral dynamics, then it will not diverge.

      In contrast, if the ENSO dynamics is forced by a relatively few tidal factors, then it won’t have this problem of divergence. Here is a fit with only two lunar forcing factors (the nodal and anomalistic terms) applied to Laplace’s tidal equations:

      Most of the effort in fitting is to use cross-validation techniques, whereby a training interval is optimized and then compared to a test interval. Try your technique on a training interval of say 1880-1950 and see how it does outside of this interval.
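      One minimal way to set up that train/test comparison, assuming numpy (the linear-trend "model" and the function names here are hypothetical stand-ins):

```python
import numpy as np

def train_test_rms(t, y, fit_fn, split):
    """Fit on t < split; return RMS error on the training and test spans.
    fit_fn(t_train, y_train) must return a callable model(t)."""
    train = t < split
    model = fit_fn(t[train], y[train])
    rms = lambda mask: float(np.sqrt(np.mean((model(t[mask]) - y[mask]) ** 2)))
    return rms(train), rms(~train)

# Synthetic anomaly series and a plain linear-trend "model", split at 1950
t = np.linspace(1880.0, 2017.0, 500)
y = 0.01 * (t - 1880) + 0.1 * np.sin(2 * np.pi * (t - 1880) / 11)

def linear_fit(tt, yy):
    slope, intercept = np.polyfit(tt, yy, 1)
    return lambda x: slope * x + intercept

rms_in, rms_out = train_test_rms(t, y, linear_fit, 1950.0)
```

      A model that generalises should give an out-of-sample error comparable to the in-sample error, not a multiple of it.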

  2. Alfons Mittelmeyer says:

    For me it was clear that this would happen. The high global temperature during the maximum of the El Nino filled the headlines. But I doubt that this will happen now too.

  3. JCH says:

    The 50-year HadCRUT4 trend as of the end of July 2017 was 0.167 °C per decade; the 50-year trend before the El Niño started was 0.153 °C per decade. Are you claiming that August and September 2017 have reduced the long-term trend from its post-El Niño value to what it was before the El Niño?
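    For reference, the quoted per-decade figures are just least-squares slopes over a 50-year window; a minimal numpy sketch on synthetic data (not the actual HadCRUT4 series):

```python
import numpy as np

def trend_per_decade(years, anomalies):
    """Ordinary least-squares slope of an anomaly series, in degC/decade."""
    slope_per_year = np.polyfit(years, anomalies, 1)[0]
    return 10.0 * slope_per_year

# Synthetic 50-year monthly record warming at exactly 0.015 degC/year
years = np.arange(1967.0, 2017.0, 1.0 / 12.0)
anoms = 0.015 * (years - years[0])
decadal = trend_per_decade(years, anoms)   # recovers 0.15 degC/decade
```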

    • Clive Best says:

      Let’s wait and see. However, Yes I do think the rate will reduce to pre-El Niño values. One of the reasons the rate remains constant is that emissions are forever increasing. If emissions stabilise and eventually reduce, the rate will reduce. If emissions fall by more than a half, CO2 levels will slowly begin to fall and so will temperatures. The most important thing right now is to stop annual increases in CO2 emissions. That alone will begin to stabilise the situation.

  4. crackers345 says:

    it’s just natural variability.
    it’s happening all the time.
    it does not disprove agw,
    by any means.

  5. nobodysknowledge says:

    The higher troposphere is still warming. (UAH october) Some lag in the system? Who knows where this will develop the next year?

  6. David Walker says:

    “[T]he fact that CO2’s influence has been
    measured, and there are
    no other factors shown”

    Utter drivel.

    There is zero EMPIRICAL evidence that atmospheric CO2 concentration has any SIGNIFICANT (ie distinguishable from noise) effect whatsoever on Global temperature.

    Mankind can no more SIGNIFICANTLY alter the temperature of the Earth than SIGNIFICANTLY alter the time the Sun rises and sets.

    You have no idea whatsoever what you’re wittering about.

  7. Charles May says:


    Thanks for taking the time to respond. I value your opinion.

    I have thought about some of the issues you raised, and I realized I might have an answer. This year I conducted a little study with the Nino 3.4 data. I used the data that was dated 04/01/2017. I then removed up to five years of data to see how things would hold up. I subtracted 1, 2, 3 and 5 years of data before this date to see what would happen with the predictions.


    With close to five years of data removed, the solution still predicted the recent El Nino, the following trough, and the hump in 2017. Here is a better view in a close-up.


    The 3-year line is the only line that noticeably varies, but it still reveals the same features.
    I am pleased with these results. If you compare this with how the ENSO models perform, I am crushing them. I question whether their predictions are worthwhile even a few months out.

    • I usually do a training run from 1880-1950 and see if it can predict the time series from 1950 to current. And even that is suspect, because any results have the possibility of being tainted, usually subconsciously, i.e. by eliminating poor performers.

      The way around this is by coming up with a model based on geophysical laws and quantitatively known forcing measures. There is no room here for eliminating a factor without having to abandon the model.

      • Charles May says:


        I have a little different take on this due to my experience.

        I started working on rotating equipment back in the mid-70s. I have had long experience with FFTs. I go back to the first desktop analyzer, the Nicolet 444. This is before PCs.

        We placed transducers of various kinds on and throughout the equipment. We would run the equipment and make the FFTs for all those transducers. Without computers we had to use a plotter to make the traces. After a test we might have thousands of traces to review.

        We did this for one reason. We wanted to know what the signature consisted of. We came up with the physical basis for many of the peaks afterward. We used the FFTs as an investigative tool. It would be nice to have the physical basis ahead of time. Of course we had some foreknowledge of what should be expected but in many cases we did not. Perhaps, if the goal is to eventually construct a model, maybe you should determine what you have to model before you get started.

        In almost all cases we came up with the physical explanations for what we had measured and having that understanding enabled us to implement remedial measures to mitigate the bad actors.

        From the start we never had a complete mathematical model of the equipment. We used what we learned from the tests to construct one.

        I think that can apply here too. Perform the signal analysis to determine which cycles you need to account for in your model.

        Paul, I wish I had a physical basis for the cycles I have identified. I think that is the next step, but at least I now know what I am looking for. Let us come up with the geophysical laws that apply. I would suggest you now have a head start from the cyclic analysis.

        You can see my pathway is a bit different.

        • There are several criteria for determining the quality of a time-series model, combining the quality of fit with the number of free degrees of freedom used to create the model. So if a correlation coefficient is used to evaluate the quality, the criterion is to divide it by the number of degrees of freedom to arrive at a score.

          The ENSO model I use is based on only two known geophysical factors, and so would have a quality score likely 10x better than your model, assuming you have at least 20 cycles in your model. That would give 20 × (frequency + amplitude + phase), so at least 60 DOF as a comparison.

          Yet if there is a physical basis and the model is solid, then the number of degrees of freedom is less meaningful, as occurs, for example, with a tidal analysis program, which can have a few main factors for amplitude and phase and hundreds of harmonically related minor parameters.
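          That scoring idea can be written down directly; a toy numpy sketch in which the function name and both "models" are purely hypothetical:

```python
import numpy as np

def quality_score(y, y_fit, n_free_params):
    """Correlation coefficient penalised by the number of free parameters."""
    r = np.corrcoef(y, y_fit)[0, 1]
    return r / n_free_params

# Two hypothetical models achieving an identical (perfect) fit to toy data
y = np.array([0.1, 0.3, 0.2, 0.5, 0.4])
fit = y.copy()

score_sparse = quality_score(y, fit, 2)    # e.g. two physical forcing terms
score_dense = quality_score(y, fit, 60)    # e.g. 20 cycles x (f, A, phase)
# Same correlation, but the sparse model scores 30x higher
```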

          • Charles May says:


            I will freely admit that you have lost me. I went so far as to call my retired friend who conducted the signal analysis of our equipment under test. He admitted that he did not understand what you are doing either. At least my friend made me feel like I was not a total fool.

            We conducted the FFT analysis of the equipment to understand what frequencies were contained in the signal. Our goal then was to identify those frequencies that were in the signal that had a significant S/N ratio and reduce their magnitude. The unit that vibrates the least will be the most reliable.

            With those significant peaks identified, we then had to get a physical understanding of where they originated and then implement remedial design improvements to mitigate them.

            This is essentially what I did with the actual Nino measurements. I identified those discrete signals that were present in the overall temperature signature. To do this I used Dr. Evans’ Optimal Fourier Transform (OFT). I use around 90 discrete frequencies, so that would be 270 parameters.

            I then took those parameters as inputs into a Marquardt procedure that minimizes the sum-of-squares error to yield the best fit to the measured data. Once I gained a good fit to the measured data, I think it furnished license to project it modestly forward in time to see what resulted.
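            A sketch of that two-stage idea, assuming scipy and only two synthetic cycles rather than the actual ~90-frequency OFT output (all names here are illustrative): rough spectral estimates seed a Levenberg-Marquardt least-squares refinement.

```python
import numpy as np
from scipy.optimize import least_squares

def sum_of_sines(params, t):
    """params packs 3 values per cycle: amplitude, frequency, phase."""
    p = np.asarray(params).reshape(-1, 3)
    return sum(A * np.sin(2 * np.pi * f * t + phi) for A, f, phi in p)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 20.0, 400)
true_params = [1.0, 0.25, 0.3, 0.5, 0.9, 1.1]
y = sum_of_sines(true_params, t) + 0.05 * rng.standard_normal(t.size)

# Rough spectral estimates seed the fit; Levenberg-Marquardt then
# refines all parameters by minimising the sum-of-squares residual
x0 = [0.8, 0.248, 0.2, 0.4, 0.905, 0.9]
res = least_squares(lambda p: sum_of_sines(p, t) - y, x0, method="lm")
# res.x recovers the two frequencies near 0.25 and 0.9
```

            Because Levenberg-Marquardt is a local optimiser, the starting frequencies must already be close; that is what the spectral step supplies.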

            In an earlier comment I pointed out that I cut up to five years of data off, and the solution yielded by fitting to the remaining data was able to project out five years and confirm what we are now measuring today. I think that alone is sufficient to confirm the validity of the fit.

            This is not a physically determined model but what I would suggest is that if the significant frequencies that are in this model can be associated with a physical basis then a comprehensive grasp of what is happening can be attained. That is precisely what we did with our rotating equipment. From the frequency analysis of the equipment we eventually understood what was going on physically and could remedy those situations.

            My review of ENSO models does not inspire confidence. I think something is missing or absent in those models. I offer this as substantiation:
            Javier furnished this website not long ago.


            Here is a screenshot from that site that reveals how the ENSO models were working:


            Here are two more from the NOAA website (There is one from April and a recent one):



            For the latter two and the first, how can we judge that the models are working properly? From April to October those are drastic changes. Can these models be deemed useful for making projections?

            When my method can reasonably predict what is now happening from close to five years ago, it would appear I am on the mark.

            I would be delighted to furnish the signals that make up the fit to the measured data. Perhaps, you would recognize some of them.

            To close, all I can do is point to our experience: the approach worked there and seems to be working here too.

          • Agree that 90 discrete frequencies will do a good job of modeling ENSO. That’s around 3 degrees of freedom for every year in the ENSO record. Most DSP engineers would also agree that’s a sufficient # of parameters. So congrats on proving that Fourier series works to recover a time-domain signal!
            What’s more challenging is to do the same with 3 discrete frequencies and the non-linear wave equation that governs ENSO dynamics.
            You’re incorrect in stating that the consensus science around ENSO modeling is wrong. There’s lots of research in delay differential equations, the Zebiak-Cane model, etc., which is what I’m working on improving. As with most science, the improvements are incremental. The trick is to follow the scientists that know what they are doing. Like every other area of scientific research, not every idea will pan out.

  8. crackers345 says:

    Paul wrote:
    “So congrats on proving that Fourier series works to recover a time-domain signal!”

    That doesn’t mean there is any physical reasoning behind such math.

    Any signal — any — can be expressed as a Fourier series. That doesn’t mean the major frequencies have physical causes, or that those frequencies will appear in the future.

    There is (much) more to physics than curve fitting.
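    That point is easy to demonstrate: even pure noise, which by construction contains no physical cycles, is reconstructed exactly from its Fourier coefficients. A short numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.standard_normal(256)          # arbitrary "signal": pure noise

# Forward transform, then inverse reconstruction
Y = np.fft.rfft(y)
y_back = np.fft.irfft(Y, n=y.size)

max_err = float(np.max(np.abs(y - y_back)))   # exact to rounding error
```

    Perfect reconstruction says nothing about the physical meaning of the component frequencies.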

    • Crackers345 says:
      “That doesn’t mean there is any physical reasoning behind such math.”

      Exactly! I think that is the point I was trying to get at. Consider tidal analysis, which is curve fitting but based on a physical mapping to precisely known periods.

      “There is (much) more to physics than curve fitting.”

      Except for the fact that all data lies on some sort of curve or manifold (in N-dimensional space), I agree with that statement. Experimental physics is all about transforming measurements to a curve which can then be modeled. There are very few exceptions to this rule.

  9. Charles May says:

    You stated the following:

    “That doesn’t mean there is any physical reasoning behind such math.”

    It also does not mean there isn’t.

    When I get the vibration measurements off my equipment, I guarantee you that is not curve fitting. Every one of those discrete frequencies I see in the signature is real. It is then up to me to come up with the physical explanation of where they come from. The source may be electrical or mechanical, but I guarantee they are real. This is not curve fitting.

    You seem to be ignoring the benefits of a very important investigative tool.

    Use the FFT to get a handle on what you are dealing with. You may be surprised. With understanding of what you have measured you will improve your chances of constructing a better model.

    You would have a very hard time convincing me, based upon performance, that the ENSO models have much if any utility.

    • crackers345 says:

      why are the FT frequencies physical?

      • “why are the FT frequencies physical?”
        He has 90 different frequencies, so they obviously aren’t at this point.

        Conventional, well-accepted tidal analysis can have hundreds of frequencies, but these are all related by a combinatorial harmonic expansion of 6 fundamental frequencies. In the end, about 4 to 7 primary cycles are used to fit to tidal SLH readings.
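        The harmonic-expansion structure can be sketched in a few lines; this illustration uses only three of the six fundamental astronomical rates, with their standard published speeds:

```python
# Illustrative values: three of the six fundamental astronomical rates
# (degrees per hour) used in tidal harmonic analysis -- the mean lunar
# day (T), the lunar month (s) and the solar year (h).
T = 14.4920521
s = 0.5490165
h = 0.0410686

# Every constituent's speed is an integer combination of the fundamentals,
# e.g. the two largest semidiurnal constituents:
M2 = 2 * T                  # principal lunar, 28.9841042 deg/hour
S2 = 2 * T + 2 * s - 2 * h  # principal solar, 30.0000000 deg/hour
```

        However many constituents a program carries, they are all constrained in this way, unlike a free bank of 90 independent frequencies.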

    • crackers345 says:

      “With understanding of what you have measured you will improve your chances of constructing a better model.”


      All you have done is a curve fit to historical data, no physics, and no prediction of the future.

      • Yes, he has to do physics. My suggestion is to start with the simple models for ENSO dynamics such as delay differential and Zebiak-Cane and then work from there. Conventional Fourier analysis does not always work with non-linear differential equations, as is well known with respect to Mathieu equations, which arise in hydrodynamics and orbit calculations. The issue is that a modulation in the equation will lead to multiple splitting in the spectral components, so that it is hard to make sense of their origin.
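        The splitting effect of a modulation term can be shown with plain amplitude modulation, a numpy sketch: a single carrier acquires sidebands at the carrier frequency plus and minus the modulation frequency, so a naive spectral decomposition sees three lines where the dynamics contain only two frequencies.

```python
import numpy as np

T, n = 100.0, 4096
t = np.linspace(0.0, T, n, endpoint=False)

carrier = np.cos(2 * np.pi * 0.50 * t)                        # 0.50 Hz
modulated = (1 + 0.8 * np.cos(2 * np.pi * 0.05 * t)) * carrier

def spectral_lines(y, frac=0.1):
    """Indices of FFT bins holding at least frac of the peak amplitude."""
    spec = np.abs(np.fft.rfft(y))
    return list(np.flatnonzero(spec > frac * spec.max()))

lines_plain = spectral_lines(carrier)    # bin 50 only: the 0.50 Hz line
lines_mod = spectral_lines(modulated)    # bins 45, 50, 55: sidebands appear
```

        Frequencies are chosen to land exactly on FFT bins (bin spacing 1/T = 0.01 Hz) so the splitting is not confused with spectral leakage.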

  10. Charles May says:


    Let me first say I wish you success in your endeavors. You are citing things that are not in my experience. I worked on rotating equipment for 35 years. In those 35 years I could not have implemented the many design improvements I came up with without the benefit of FFT. I have probably looked at thousands and thousands of plots to seek the answers to the questions that needed answers.

    On the results of FFT I will simply say this: all frequencies and amplitudes that result from an FFT are real and physical until shown otherwise. Maybe it happened over the years, but presently I can’t come up with a counterexample.

    The closest to that I can think of is that when we had a ground loop in the instrumentation we would get erroneous signals that we could identify.

    So, frequencies are identified that you don’t know where they came from. That is when the fun begins. I once had a peak with a very large S/N and I had no idea where it came from. I looked at the FFT output of other tests with different operating temperatures and in some cases, it was not there. I confirmed it was a fluid resonance when I took into account the speed of sound in water. I eventually did come up with a physical basis.

    The key is to identify the frequencies. The underlying physical reason for it can be discerned later.

    As I stated in an earlier reply, the predictability of my analysis has been confirmed in my mind. When I analyzed data from five years ago, it still revealed the El Nino that recently passed and the modest hump we have seen in the data in 2017; I think I have done enough on that score. Case closed.

    In simple terms, I am trying to convince you to use all the diagnosis tools at your disposal. Perhaps, there is another physical process that needs to be included. Maybe the FFT would help you identify it. Maybe it won’t. Query the data and learn from it.

    Almost all the design improvements that I came up with came from empirical data and not mathematical models. The models were too immature (they did not include all the physical processes).

    To end it on a humorous note don’t be an Inspector Clouseau and forget your magnifying glass so you can’t see the fingerprints. Use the tools that are available to you.

    I sincerely wish you well in what you are doing.
