Australian Raw temperature results

Homogenisation more often than not increases warming trends. ACORN-SAT is based on 112 stations, many of which are combinations of nearby stations which are then all homogenised. However, there are in fact a total of 1805 individual stations scattered across Australia, most of which are excluded from ACORN-SAT. These all have varying time coverage, but most still have sufficient data between 1961 and 1990 to calculate temperature anomalies based only on the recorded temperatures. As a result I have been able to compare the raw measurement result from all these stations against the homogenised ACORN-SAT result.
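The anomaly step described above (each station referenced to its own 1961-1990 mean, with stations lacking baseline coverage excluded) can be sketched roughly as follows. The function name and the 15-year threshold are my own illustrative choices, not taken from the actual analysis:

```python
import numpy as np

def station_anomalies(years, temps, base_start=1961, base_end=1990, min_years=15):
    """Convert one station's annual mean temperatures into anomalies
    relative to that station's own 1961-1990 average.

    Returns None if the station has too few valid years inside the
    baseline period to define a reliable normal."""
    years = np.asarray(years)
    temps = np.asarray(temps, dtype=float)
    in_base = (years >= base_start) & (years <= base_end)
    if np.count_nonzero(in_base & ~np.isnan(temps)) < min_years:
        return None  # insufficient baseline coverage: exclude the station
    normal = np.nanmean(temps[in_base])
    return temps - normal
```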

Figure 1: Difference between the homogenised result (ACORN-SAT) and the raw measurement result (1805 stations). Note how homogenisation results in an approximately linear increase of apparent warming with time, crossing zero at the centre of the normalisation period. See Figure 2 for overall result.

The main advantage of the triangulation method of integrating spatial data is that it copes very well with a changing mix of stations, since it avoids any binning. It also seamlessly combines stations at the same location, replacing older stations with newer ones automatically over time. Figure 2 shows the spatially averaged ‘Raw’ results as compared to ACORN-SAT, both calculated identically.
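The triangulation average amounts to integrating the linear interpolant of the station anomalies over the triangles and dividing by the total area. A minimal sketch of that idea, using scipy's Delaunay triangulation (an illustration under simplifying assumptions, not the actual code; it works in plain lat/lon coordinates and ignores the curvature of the Earth):

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulated_mean(lons, lats, values):
    """Area-weighted mean of station values via Delaunay triangulation.

    The integral of the linear interpolant over one triangle equals the
    triangle's area times the mean of its three vertex values, so the
    spatial average is the area-weighted mean of those triangle means."""
    pts = np.column_stack([lons, lats])
    vals = np.asarray(values, dtype=float)
    tri = Delaunay(pts)
    total_area = 0.0
    total_integral = 0.0
    for s in tri.simplices:
        a, b, c = pts[s]
        # triangle area from the cross product of two edge vectors
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))
        total_integral += area * vals[s].mean()
        total_area += area
    return total_integral / total_area
```

Because each station only influences its surrounding triangles, stations can drop in and out of the network without any re-binning, which is the property noted above.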

 

 

First of all, it is clear that Australia has warmed over the last 100 years. However, the net warming in the raw data is about 0.3C less than that advertised by ACORN-SAT. The 112 selected homogenised stations have increased the apparent warming by about 30%.

Figure 2: Comparison of the averaged raw temperature data to the homogenised ACORN-SAT data

Here is an animation of the raw spatial data showing large year-to-year variability, but with an overall gradual warming. Note how the mix of stations is constantly changing, simply because each one has a different time range. Cities like Sydney and Melbourne each have 4 or 5 contributing stations.

In the previous post I tried to show, using 3 examples, that the homogenisation algorithm itself is responsible for a slight increase in warming trends. I was then accused of cherry-picking. Since then I have looked at a further 18 ACORN stations. Of these, none show a reduced trend while 12 show clear linearised increases. Here is one example. Click reload if the animation does not rerun automatically.

Conclusions.

The raw temperature data from 1805 stations shows that Australia has warmed by about 0.8C over the last 100 years. This is about 0.3C less than ACORN-SAT.

Homogenisation of the data from the 112 selected sites results in a linear increase of apparent warming with time. This is visible in ~70% of the stations. The pair-wise comparison algorithm is the likely cause, because it adjusts station data so as to agree with nearby ones. An analogy would be placing a weak magnet under a sheet of paper covered with iron filings.


37 Responses to Australian Raw temperature results

  1. paulski0 says:

    Homogenisation more often than not increases warming trends.

    I assume this is meant to refer to Australian data, or to the global land-only record? In the global land+ocean datasets homogenisation overall reduces the warming trend.

    Perhaps you’re building up to something, but I’m not sure what point you’re making at the moment. As far as I’m aware everyone agrees that homogenisation can significantly change regional trends – there would be no point including the process if it didn’t. And everyone agrees that in Australia the net effect of the homogenisation procedures (independently: NOAA, ACORN and Berkeley; others?) is an increase in the overall amount of warming. Or, to put it another way, a reduction of spurious cooling found in the raw data.

    To the extent there is a real debate it is how well current homogenisation procedures extract the true climate signal from the raw data. Simply pointing out that homogenisation has an effect doesn’t add anything to that debate.

    As far as I understand things, that effect vector shouldn’t be too surprising given general patterns of how stations change, similar to the US situation.

    • Clive Best says:

      Yes – so far this only applies to Australia and just land data.

      I think it is wrong to assume that there is any ‘true’ climate signal to be extracted from the raw data through homogenisation. That is an a priori assumption which then prejudices your choice and tuning of algorithms. For example, Berkeley assumes that temperature is a smooth continuous function of space and time. Perhaps it isn’t. You see large year-to-year and region-to-region variations in surface temperatures.

      You know yourself that there are external cyclical effects on surface temperatures.

    • Ron Graf says:

      And everyone agrees that in Australia the net effect of homogenization…is an increase of the overall amount of warming. Or to put another way, a reduction in spurious cooling found in the raw data.

      On what basis is the raw trend considered spurious? Is not proof required to make this an a priori assumption?

      Clive believes that the clearly demonstrated bias is an inadvertent effect of pairwise analysis for adjustments. I agree that this propagates the bias, but I believe it actually comes from the moving of stations after they gradually become too infected with non-climate effects to continue. All the non-climate effects get cleaned out initially by the move to a fresh location, but instead of allowing the correction of the record, the new break in the trend is “fixed” by adjustments which re-cement the non-climate effects into the record.

      Non-climate station warming is acknowledged by all, but it seems that discarding any warming is taboo, whether it comes from a newly installed nearby HVAC condenser, expanded pavement, or local urbanization.

      • Ron Graf says:

        Supporting my theory, one can note in Figure 2 above that the divergence substantially begins in the mid-1970s, when moving stations out of urban areas was adopted as an ongoing intention.

      • paulski0 says:

        On what basis is the raw trend considered spurious?

        On the basis that it is identified as being spurious by the homogenisation algorithm. You may disagree about whether these procedures are returning accurate results, but surely you recognise that the purpose is removal of spurious artifacts from the data?

        Is not proof required to make this an a priori assumption?

        It’s not an a priori assumption, it’s a result. Homogenisation algorithms have been repeatedly benchmarked against equivalent synthesised temperature data with induced errors and found to provide accurate corrections towards known true values. When these validated procedures are applied to the real-world raw Australian data, they indicate an overall spurious cooling in the raw data.

        Clive believes that the clearly demonstrated bias is an inadvertent effect of pairwise analysis for adjustments.

        Clearly that is his belief, but my point is that simply demonstrating the difference between raw and adjusted does absolutely nothing to support or test that belief.

        • Ron Graf says:

          You may disagree about whether these procedures are returning accurate results, but surely you recognise that the purpose is removal of spurious artifacts from the data?

          Yes, I agree that the stated purpose of the adjustments is to remove artifacts. I disagree that this is a sound practice when the subject of interest is the signal created by the population of stations. The continuity of any particular individual station is superfluous in a chaotic system. The more important statistical principle is not to meddle with the data. The effects of periodic station relocation in a population of 1800+ stations are a random error to the population and thus require no adjustments.

          However, systematic changes do need to be adjusted for. For example, the introduction of the Stevenson screen thermometer housing in Australia in 1910 lowered the Tmax significantly across the station population.

  2. Olof R says:

    Your results look similar to Berkeley Earth raw minus adjusted for Australia

    https://twitter.com/rarohde/status/973003919772315648

    However, looking at the global land average, the adjustments decrease warming trends over the last 60 years

    • Clive Best says:

      “the adjustments decrease warming trends the last 60 years”

      That isn’t true. Here is a comparison between GHCN V1, V3 (uncorrected) and V3C

      Globally, temperatures were also increased after 1970. It may be that Berkeley Earth has thousands of extra random stations which affect their trends. They boast about having more sources than anyone else. This is not the case in Australia, as I think I have all the available stations.

      • Nick Stokes says:

        “Globally temperatures were also increased after 1970.”
        Using GHCN-M, I get very similar results to Robert Rohde, here. The plot of the effect of adjustment is here, with the global land being the top red curve:

        Robert’s broadly similar result is here

        Your plot seems odd – it shows 1998 as the warmest year even using adjusted GHCN.

      • Olof R says:

        Clive,
        I don’t think you are using area-weighted averages (as in an operational dataset), but I think Nick is.
        In the high Arctic, for instance, there are a few stations influencing large areas, and they are adjusted down, very much by GHCN and less by BEST.
        I guess that the “conservative” homogenisation algorithms find the fast Arctic warming suspicious. BEST is less affected because they have more stations, which convinces the algorithms that the warming is real.

      • Nick Stokes says:

        Olof,
        As I understand it, he integrates the linear interpolant over the triangles, which is equivalent to what I do. But the results definitely look wrong, and I think the reason is what I drew attention to below: the treatment (or not) of land boundaries. Unless you are careful, an attempt to mesh a land area will include lots of sea area. That sea area is then used to weight the land stations, and in quite wrong ways. For one thing, it greatly overweights coastal stations, and erratically so. It will give exceptionally large weights to remote islands. This is area weighting, but weighting to the wrong areas.

        I have a gloomy feeling that he may in fact have triangulated the whole ocean area and be assigning that weight to various coastal stations. What I do in this situation is triangulate with the SST grid as well. Then I eliminate triangles that include SST points. There is a more elaborate treatment described here.
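The SST-point filtering Nick describes can be sketched as follows (purely illustrative, using scipy; his actual treatment is more elaborate): triangulate the land stations together with the sea grid points, then discard every triangle that has a sea point among its vertices.

```python
import numpy as np
from scipy.spatial import Delaunay

def land_only_triangles(land_pts, sea_pts):
    """Triangulate land stations together with sea (SST) grid points,
    then keep only the triangles whose three vertices are all land
    stations. Returns vertex indices into land_pts."""
    land_pts = np.asarray(land_pts, dtype=float)
    sea_pts = np.asarray(sea_pts, dtype=float)
    n_land = len(land_pts)
    tri = Delaunay(np.vstack([land_pts, sea_pts]))
    keep = [s for s in tri.simplices if (s < n_land).all()]
    return np.array(keep)
```

The same machinery covers the "fake station" suggestion made later in the thread: a fake point placed in the Bight plays the role of a sea point, and every triangle connected to it is dropped.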

        • Clive Best says:

          Nick,

          I should probably remove the 3 remote island stations, but otherwise I don’t really see anything wrong. There would then remain Tasmania and a part of the Great Australian Bight. However, it is always land values that are triangulated. There are inland triangles which would be just as big.

          More importantly I calculate the ACORN-SAT anomalies using exactly the same method so the comparison is always apples with apples.

        • Nick Stokes says:

          Clive,
          My gloomy feeling there is about the global plot, GHCN3 vs 3C. What is done there to restrict the triangulation to land only?

      • Clive Best says:

        Nick,

        This is what I find for the effect of adjustments on GHCN-M global land values. I am always using a normalisation period of 1961-1990. In this old case I am using the CRU PERL software to perform 5 deg binning. Adjustments do increase trends post 1970.

        • Olof R says:

          Clive,
          Maybe 5-degree binning isn’t enough compared to global area averaging. In your example the effect of adjustments seems quite flat after 1975, which means that there is little effect on the trend after 1975.

          Look at Nick’s old gadget:
          https://moyhu.blogspot.se/2015/02/homogenisation-makes-little-difference.html

          There is very little difference between TempLSgrid and TempLS grid adjusted after about 1975 (I think these are 5 deg binned)

          However, the differences between the globally area weighted TempLSmesh and TempLSmesh adjusted are much larger, and the trend of the unadjusted TempLSmesh is larger after about 1965

          Disclaimer: data and GHCN versions are not the most recent, and are also diluted by SST.

  3. Bryce Payne says:

    Clive,
    Is it not a foregone conclusion that the use of any summary statistic will obscure details and lead to generalized numerical conclusions that may or may not, depending on the purpose and the appropriateness of the method used for that purpose, provide “accurate” conclusions with respect to the espoused purpose? Should not one assume that two different statistical approaches will support somewhat (or very) different conclusions depending on the strength and consistency of the data set and the differences in the methods used? Climate is a complex phenomenon, with the most influential factors varying by location over time, let alone the nuances of interactions among those factors. That having been said, is there a meta-conclusion, say, for example, that both the homogenized ACORN data and the raw-data triangulated mean lead to the conclusion that there has been a clear warming trend?
    Continuing from a somewhat different perspective, the difference between the methods collapses to zero somewhere between 1972 and 1979, depending on which figure one looks at. Not surprising, of course, except that any actual real-world trend in the temperature is unlikely to have a seemingly arbitrary central value in the 1970s. Rather, there will be a real-world trend or trends (cooling, stable, or warming) in temperature before and after any arbitrary summary central value date (assuming the summary has some fidelity to the original data set). If so, can that trend and the actual inflection points in the trend be identified and used to better describe, and hence understand, what we are all, rightly, so exercised about?
    For example, in both your figures, but easier to see in the second, there appears to be a real inflection from a relatively stable temperature period (1900s-1950s) to a consistent warming trend (1950s-1990s), then on to a possibly accelerated warming trend since the early 2000s. This seems to be in agreement with other summary methods, e.g. various animated global temperature history maps. It is interesting that the two earlier trends were about 40 years each, which seems a little discouraging in that it suggests we might not be able to confidently discern whether the impression of an accelerating warming rate since the 2000s is only that, an impression.

    • Clive Best says:

      Bryce,

      “is there a meta-conclusion, say, for example, that both homogenized ACORN and raw data triangulated mean lead to the conclusion there has been a clear warming trend?”

      : Yes !

      The reason the difference drops to zero around 1975 is the normalisation baseline of 1961-1990. This is one of the foibles of using anomalies rather than absolute temperatures. I actually wonder sometimes whether we couldn’t use absolute temperature averages for a single continent like Australia, which has no high plateaus etc.
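The baseline effect is easy to demonstrate with made-up numbers: if homogenisation adds a roughly linear trend to a series, then after both series are converted to 1961-1990 anomalies their difference must average zero over the baseline, so it crosses zero near the baseline midpoint of about 1975 (the trends below are invented purely for illustration):

```python
import numpy as np

years = np.arange(1910, 2021)
raw = 0.008 * (years - 1910)          # invented raw warming trend (C)
homog = raw + 0.003 * (years - 1910)  # homogenised: an extra linear trend

def to_anomaly(series):
    """Subtract the series' own 1961-1990 mean."""
    base = (years >= 1961) & (years <= 1990)
    return series - series[base].mean()

diff = to_anomaly(homog) - to_anomaly(raw)
# diff is still linear, but its 1961-1990 mean is forced to zero,
# so it crosses zero at the centre of the baseline (~1975.5)
zero_year = years[np.argmin(np.abs(diff))]
```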

      I agree that the period before 1960 shows no real trend but appears to be cooler. I would not expect CO2-induced warming to be noticeable much before 1960. The global data actually shows a warm spell between 1930 and 1960. This is not evident in Australia.

  4. Nick Stokes says:

    Clive,
    I notice your triangulation includes huge amounts of sea. This greatly upweights the stations involved, particularly Lord Howe and Cocos Islands, and odd stations in Tasmania. I think remote islands just have to be omitted – better to give them zero weight than to make them several % of the Australian total.

    For triangulating situations like the Bight, I recommend putting a fake station in the middle, and then removing all triangles that connect to it. That should leave a triangulation conforming much better with the coast. You can use several stations.

    It also adds greatly to the issue that you have a constantly changing mix of stations. If these way-overweighted stations drift in and out of the mix, anything can happen.

    • Clive Best says:

      Nick,

      I thought you might notice that 😉

    • Clive Best says:

      Nick,

      I agree that those Islands should be removed, although I think Tasmania should stay. Thanks for the proposal to use fake stations to guide the triangulation around the coast. Sounds like a good idea.

      Do you know what BOM use to calculate ACORN-SAT annual temperature anomalies?

    • Clive Best says:

      Nick,

      I redid the calculation removing Lord Howe and Cocos Islands. The result is shown below. It has a very small effect but it does boost 2014 max temp.

      “I have a gloomy feeling that he may in fact have triangulated the whole ocean area and be assigning that weight to various coastal stations.”

      No, this is not true. I only use stations at land coordinates. There are many islands just off the Queensland coast which I think should be kept. Then we are left with the Bight area, which can get partially spanned by coastal stations, and likewise the Tasman Sea. However, there are larger areas inland with no stations, so I am in two minds as to whether it matters. I would need to do more work to input fake stations just off the coast, as I am using BOM’s metadata rather than my own.

      and here is the proof.

      • Nick Stokes says:

        Clive,
        My gloomy feeling is about the global plot, GHCN3 vs 3C. What is done there to restrict the triangulation to land only?

        The new map is certainly better. It still greatly overweights places like Willis Island, in the Coral Sea. The problem with overweighting is partly inaccuracy and partly inconstancy. It becomes a big deal whether that station reports or not.

        Here (from here) is the triangulation you get by including SST grid points.

        Dark orange triangles have all 3 nodes on land, light orange have 2, light blue have 1, etc. I would generally use both light and dark orange as representing Australia, but you can choose. Excluding a marginal triangle means underweighting some land station; including it means overweighting. But the triangles at issue have smaller area.

        Here is what can be done with land masks and some grid refinement:

        • Clive Best says:

          The GHCN3 vs 3C comparison did not use any triangulation. It was done 3 years ago, and I think I used the same PERL scripts that CRU were using at the time for CRUTEM4. So it is binned into 5×5 degree cells and then averaged with area weighting.
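That gridding procedure, 5x5 degree binning followed by area weighting, might be sketched like this (a toy version of the idea; the actual CRU PERL scripts differ in detail):

```python
import math

def binned_mean(stations):
    """CRU-style gridded average: bin station anomalies into 5x5 degree
    cells, average within each cell, then average the cell means
    weighted by cos(latitude) to account for cell area.

    `stations` is a list of (lat, lon, anomaly) tuples (an illustrative
    format, not the real file layout)."""
    cells = {}
    for lat, lon, anom in stations:
        key = (math.floor(lat / 5), math.floor(lon / 5))
        cells.setdefault(key, []).append(anom)
    num = den = 0.0
    for (ilat, _), anoms in cells.items():
        centre_lat = ilat * 5 + 2.5            # latitude of cell centre
        w = math.cos(math.radians(centre_lat))  # area weight
        num += w * sum(anoms) / len(anoms)
        den += w
    return num / den
```

Unlike the triangulation, the result jumps whenever a cell gains or loses its only station, which is one reason the two methods can diverge.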

          I like the way you have done the grid refinement. The poor people of Hobart, though, might be a bit upset to be left out of Australia!

    • Ron Graf says:

      Nick, I agree with you that the islands should show more sea influence in their trends than they do. Do you now agree that they do not?

      • Nick Stokes says:

        Ron,
        I commented above. A problem with islands is that it isn’t clear what area they should be said to represent. Their actual land is often negligible. They help represent an area of sea, but then is that really Australia?

      • Ron Graf says:

        Nick, my point was that it appears to me from Clive’s graph that taking out the islands has minimal effect. If the islands are behaving more like continental Australia than the more temperature-stable sea surrounding them, I would think you agree that to be curious. If one were comparing island, sea, coastal and inland stations, what would you predict the trends of each to be relative to the others?

  5. Bryce Payne says:

    Clive,
    You stated, “I actually wonder sometimes whether we couldn’t use absolute temperature averages for a single continent like Australia which has no high plateaus etc.”

    For what it is worth, I think absolute temperature data were too readily abandoned in the first place. There are relatively straightforward statistical approaches that, as nearly as I can tell, have never been pursued. I look forward to having time to do so one of these days.

    • “There are relatively straight forward statistical approaches that as nearly as I can tell have never been pursued. I look forward to having time to do so one of these days.”

      Here’s one. Capture the ENSO signal and use that as a calibrating factor. They should be doing that with all the satellite data such as RSS and UAH.

  6. Bryce Payne says:

    Paul, Clive, et al.,
    I would appreciate a bit of help to get me started on a more traditional statistical approach to analysis of absolute temperature data. Some years ago, when I first took interest in (and was then pulled away from) the subject, I ran into a paper that discussed various publicly available climate data sets. As I, to my mind now, clearly recall, there was mention of a historical absolute temperature data set that had been developed to assess how many long-term weather (temperature) data stations there were around the world. There was mention that, after considerable evaluation, a number in the low hundreds of such stations were identified that had never been relocated and had longer-term (late 1800s to the then present) data. I lost track of that information and of how, if it is possible, to obtain the identities and data of those long-term, location-consistent stations. Does anyone have any idea what I am recalling and how to find it again?

  7. Pingback: Australian Raw temperature results – Climate Collections

  8. Bryce Payne says:

    Ron,
    You wrote, “The continuity of any particular individual station is superfluous in a chaotic system. The more important statistical principle is not to meddle with the data. The effects of periodic station relocation in a population of 1800+ stations are a random error to the population and thus require no adjustments.”

    To your points, yes, absolutely, and maybe not.

    Whether or not periodic relocation in a population of 1800+ (or any other reasonably large number of) stations requires adjustment is a question of degree, is it not? Indeed, below a certain frequency of relocations the effect would be negligible, but eventually the frequency will have an impact, and if relocations are not random, the frequency needed for an impact will be lower. To complicate matters further, since there is no reference population that would be any more statistically reliable, there would be no way to determine whether there has been an impact.

    Actually, if one wanted to pursue your argument to its limit, a complete radical randomization approach might be worth exploring. That is, if one presumes, as your point seems to imply, that basically all stations in Australia are in Australia and therefore contribute legitimately to the population mean for Australia, then once a reasonable minimum number is found (or a more than sufficiently large number presumed), say several hundred stations, it should not matter which several hundred are used to calculate the mean, as long as they are randomly selected for each calculated mean every time. Perhaps an interesting exercise would be to look at the means of randomly selected reasonably large sub-population samples. Each randomly selected sub-population will contain effectively random relocations with respect to every other such sub-population. If your proposition holds, then multiple, random, large (enough) sub-population means should be effectively the same. Whether they turn out to be the same or not, comparisons of those means might provide an interesting perspective on the inherent variability in the mean due to random location drift within the population, effects of sample size on the sample mean, etc. Boundaries of the climate system (in this case Australia, or some defined portion thereof) would have to be maintained, at least for an initial exploration of the concept anyway.
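That randomisation exercise could be sketched along the following lines (the data structure and sample sizes here are purely illustrative): draw many random sub-populations of stations and compare the spread of their means.

```python
import random
import statistics

def subsample_means(station_anoms, k=300, n_draws=50, seed=0):
    """Repeatedly draw random sub-populations of k stations and return
    the mean and standard deviation of the sub-population means.

    `station_anoms` maps station id -> anomaly for one year
    (a hypothetical structure)."""
    rng = random.Random(seed)
    ids = list(station_anoms)
    means = []
    for _ in range(n_draws):
        sample = rng.sample(ids, k)
        means.append(statistics.fmean(station_anoms[s] for s in sample))
    return statistics.fmean(means), statistics.stdev(means)
```

If the proposition holds, the standard deviation across sub-population means should be small relative to the trends of interest.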

    Perhaps someone has already done this exercise in some form…? Or perhaps someone has explored the implicit question of randomness in climate data. Can climate data from within a designated climate system ever be actually random with respect to the same data from somewhere else in that same climate system?

    • Ron Graf says:

      Perhaps someone has already done this exercise…

      Jones et al (2016)
      Estimates (using both observational data and globally complete climate model data) indicate that the effective number of independent observations at the monthly timescale for the global surface area is about 100 (see Jones et al., 1997). Thus, provided input datasets have at least 100 well-spaced sampling points for which the data are relatively free of nonclimatic biases, even if the locations of these sites are different between the different groups, they will lead to very similar large-scale area averages. For annual or decadal averages the required number of well-spaced locations can be substantially less than 100.

      So if Jones is correct, one only needs a handful of Australian stations free of non-climate effects to form a representative sample on a decadal timescale. The question then becomes how one determines non-climate effects. My answer would be to look for stations that fit the profile of suspicion and compare this suspect population against the most innocent-looking one. This has also been done by Karl (1986), Hansen (2001), Peterson (2003) [see Parker 2010], Hausfather (2013) and others. They all came up with vastly different answers. Karl found significant UHI directly related to population but could not provide a correction formula. Hansen found less UHI but provided a correction (that he did not end up following). Peterson found urban stations behaved the same as rural ones, so problem solved. The latter is what convinced a frequent blogger from Berkeley Earth to set aside his long-standing concern that UHI was a significant uncorrected systematic bias, after Matthew Menne demonstrated Peterson’s approach. I still haven’t figured out the trick, unless non-climate effects grow almost naturally everywhere and thus are only visible when comparing local neighbors directly rather than their trends.

  9. Bryce Payne says:

    Ron,
    Thanks for your concise, reference-dense response. Excellent.

    You stated, “So if Jones is correct, one only needs a handful of Australian stations free of non-climate effects to form a representative sample on a decadal timescale. The question then becomes how one determines non-climate effects. … I still haven’t figured out the trick, unless non-climate effects grow almost naturally everywhere and thus are only visible when comparing local neighbors directly rather than their trends.”

    And, not much more than a handful of stations free of non-climate effects on monthly or annual time scales.

    Clearly you are correct with respect to the practical constraining question: determining which sites are affected by non-climate effects. Still, as a novice, it strikes me that in Australia, and probably many other reasonably large areas, there surely must be tens if not hundreds of stations that are relatively isolated from known or suspected non-climate effects over relatively long time frames. Focusing on those stations, with appropriate corrections for systematic changes (e.g. screened thermometers), would seem to provide a most reliable and definitive picture of temperature changes based on absolute temperature statistics. It still seems to me that such an analysis and presentation of global temperature change over recent historical time begs doing, and would put to rest any remaining arguments that temperature is not increasing.

    The work of Jones (2016) and others all seems to raise the subtle but seemingly obvious point that receives repetitive thrashing in some circles: which time scale matters. The various records all lay out a consistent temperature track for the last 150 years or so, and that track indicates a powerful role for El Nino/La Nina and large volcanic events (Jones), which account to a considerable degree for the cyclical variations in the temperature track. It seems quite apparent to me that the durations of the warming legs of the cycle are lengthening while the cooling legs are shortening, so much so that the “hiatus” was noted with more than a reasonable amount of enthusiasm.

    Methane is a major area of interest for me, and its role has not yet, in my opinion, been adequately accounted for. Given that the reliability and consistency of the temperature track are now approaching their likely practical maximum, perhaps it is time to look at correlations between more atmospheric composition details and variations in the temperature track.

    • Ron Graf says:

      “Still seems to me like such an analysis and presentation of global temperature change over recent historical time begs doing and would put to rest any remaining arguments that temperature is not increasing.”

      There will always be argument, but it seems that the temperature record is a particularly sensitive area for a credentialed scientist who is interested in a career in government science. Thankfully Clive is giving us a peek under the hood to look around.

      The paper I cited as Karl (1986) should correctly be Karl (1988), Urbanization: Its detection and effect in the United States climate record. And the scientist who convinced the Berkeley Earth student here was Rohde, not Menne.

      The other establishment argument is that UHI only affects land, which is only 30% of the globe. My answer is that the SST record is poor and relies heavily on the land station record for credibility. If land warming were found to be 0.8C per century, there is no way the SST trend could be the same, nor could the models maintain the little credibility they still have. All of a sudden we would be looking at an effective ECS of 1.2-1.5C rather than 2-4C.

      Here is a paper on Urban heat island features of southeast Australian towns to take a look at.

  10. Bryce Payne says:

    Ron,
    I never had the impression that no one was peeking under the hood, rather that there just was not anyone interested enough to open it and take a serious look. In my area of work I have found that there are typically reams of data on most subjects and situations that can provide truly informative forensic insights into what has happened and why. It has also been my experience that very little of such data is ever actually examined unless there is a crisis or legal action. To make matters worse, when such critical or litigation situations arise, those who are engaged to do the looking are biased, if not by personal agenda, then by professional and licensing obligations – a realization that took me many years to functionally recognize and accept (it is one thing to say that such bias exists, quite another to actually recognize it and admit its consequences). Ah, but I wax poetic…

    I should point out here that I have always had misgivings about the various climate models and their projections, but in the opposite direction. I cannot see why any particular amount of temperature change can somehow be reliably accepted as intolerably risky, and another somehow as a socio-economic adaptation walk in the park. I work in an interdisciplinary realm where one benefits from recognizing that threshold surprises can and do surge up out of the interplay of biological, chemical and physical systems. Sometimes they are wonderful, and sometimes they are terrible. To cut to the chase, it seems logically inconsistent to me to argue that the current climate models are likely incorrect and at the same time accept projections from those same models regarding how many degrees C of temperature change should or should not be seriously troubling. The data clearly show that temperature is increasing; we can argue about the rate, but to what end? The data also clearly suggest that the rise is due to human behavior, mostly access to and use of fossil fuels. We could argue over the height of some theoretical temperature plateau, but our predictions of it are all competing conjectures. From my perspective, it is fundamentally important to reliably resolve the questions of whether temperature is changing, how much and how fast, not because that change per se tells us anything actionable other than that it is occurring, but because we should therefore expect the trajectory of the earth ecosystem to change; we are unlikely to be able to foresee how, but in all likelihood it will be unforeseeably costly and painful.
