Updated 11 March
Here we look at some subtle changes in trends that result from homogenisation. Over a long time span it is common for a station to have been moved, or for two different stations at the same location to have been combined, in which case a small offset in temperature values is to be expected. The objective is, where possible, to create a continuous temperature series from 1910 to the present. However, in every case I have looked at, the homogenisation procedure itself extends well beyond the join and always increases the apparent warming. Here are three examples.
1. Launceston, Tasmania. The ACORN time series is actually a combination of 3 nearby sites in the city: a) Launceston Pumping Station from 1910-1946, b) Launceston Airport (original site) from 1939-2009 and c) Launceston Airport (current site) from 2004 onwards. The animation below consists of 3 frames. Frame 1 shows the raw average temperatures (monthly and annual) from the 3 sites as the middle trace in 3 colours – red, green and pink. Above it is the maximum annual recorded temperature and the bottom trace is the minimum recorded temperature. Frame 3 shows the same thing for the homogenised data – a single red colour for monthly, blue for annual, purple for maximum and light blue for minimum. Frame 2 is a mix of the two.
The joining of the 3 bands looks fine at first sight, but closer inspection shows that the maximum and minimum temperatures in the central section are being differentially shifted so as to produce a rising linear temperature trend where none was apparent before. There are no obvious kinks in the raw data to justify this.
2. Alice Springs
A similar animation for Alice Springs is displayed below.
Alice Springs is a merge of the Post Office station (1910-1953) and the Airport from 1953 onwards. Note how the animation shows an increase in minimum temperatures at the airport and a linearisation of the trend, neither of which has any direct connection with the merge.
3. Darwin
Darwin is tropical and shows little warming since 1910, but here again we see small adjustments extending up to 1980 from a merge in 1940.
Darwin is a merge of 3 different sites: the Post Office before 1942 and two airport sites. This should require only a straightforward linear shift between the two stations, but again the shape of the early data is completely changed, producing a small linear warming trend.
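For comparison, this is the kind of correction I would expect a station merge to need: a single offset estimated over the years where both sites overlap, applied to the older record. The sketch below is purely illustrative (invented numbers, not real Darwin data, and not the BOM procedure):

```python
import numpy as np

def splice_with_offset(old_series, new_series):
    """Join two annual temperature series (dicts of year -> Tav) by
    shifting the older record so it matches the newer one over the
    years where both stations reported."""
    overlap = sorted(set(old_series) & set(new_series))
    if not overlap:
        raise ValueError("no overlap between the two stations")
    # Mean difference between the two sites over the common years
    offset = np.mean([new_series[y] - old_series[y] for y in overlap])
    joined = {y: t + offset for y, t in old_series.items()}
    joined.update(new_series)            # the newer station takes precedence
    return joined, offset

# Invented values for illustration only
post_office = {1938: 28.1, 1939: 28.0, 1940: 28.3, 1941: 28.2}
airport     = {1940: 27.9, 1941: 27.8, 1942: 28.0, 1943: 27.9}
merged, offset = splice_with_offset(post_office, airport)
print(f"applied offset: {offset:+.2f} C")
```

Anything much beyond a roughly constant shift across the join needs a justification other than the merge itself.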
Conclusion
There is evidence of warming in the raw temperature measurement data. However, I strongly suspect this has then been boosted in ACORN-SAT by the ‘homogenisation’ process, my guess by about 33%. This must also apply to GHCN, CRUTEM and BEST, since they all use essentially the same algorithm.
Clive, I think you need to explain more what you are plotting here.
You are absolutely right!
I have updated the description and hopefully the animations now run automatically.
It is amazing to see how many adjustments are typical. On Darwin one wonders what is going on in the minimum temperature line in the 1950s and 60s. This is well after the 1940 move. What could justify changing the minimum temperatures without touching the max? There should be no TOBS issue, as I believe all of Australia remained consistent at 9am with the daily 24-hr recording of Max/Min. Also on Darwin one can see a clear non-climate issue at 1940, right at the transition in the Min temperature line. Non-climate events are almost always extraneous warming influences as landscapes fill with power sources and non-absorbent surfaces. In Darwin’s 1940 case, though, it looks like an instrument wearing out.
Some time ago I looked at the raw data for Cape Naturaliste, a lighthouse on a promontory in the south-west of Western Australia, where instruments may have changed but their location has been the same since 1910. The raw data are the minimum and maximum daily temperatures. I am intimately acquainted with the vagaries of the local climate because I have been a grape farmer here since 1980 and have lived in the vicinity since birth.
My review of what’s happened is here: https://wordpress.com/post/reality348.wordpress.com/26227
Later, I looked at the Acorn Data. Large adjustments to the minimum temperature in the early years produced a series with a very different complexion.
I have no idea why the two series are so different. In the absence of a believable rationale for these changes I cannot have any confidence in the work of the BOM when it comes to the assessment of climate change.
The ocean basins warm at very different rates as I discuss here: https://wordpress.com/post/reality348.wordpress.com/31895
There is good reason why this is so, and it has nothing to do with the proportion of carbon dioxide in the atmosphere. The natural forces in the climate system produce changes in climate from year to year and decade to decade that are greater in degree than the change seen on seventy-year time scales. We simply don’t have good enough data for most of the globe, and for the southern hemisphere in particular, to draw valid conclusions about what is happening over longer time scales.
We are still getting to grips with the forces underlying natural climate change. Until we understand the forces driving short-term change at decadal intervals, we can have no confidence in suggestions as to what is going on over longer intervals.
Institutionalised climate science is unprofessional.
For Darwin, it’s not the temperature but the atmospheric pressure that holds the key to understanding climate variability. Darwin is one half of the SOI dipole (along with Tahiti), and with a more detailed view of the time series you will find that it tracks a nonlinear amplification of the yearly signal by the lunar tidal forcing. Not for the faint-hearted, because one needs to solve the Navier-Stokes equations along the equator and to resolve the lunar and solar orbital ephemerides rather precisely.
Climate science is a rigorous profession because it builds on the science that came before it: see the work of Laplace, the years of effort in conventional tidal analysis, NASA JPL, and the recent work on topological insulators, which may lead to even more breakthroughs.
Roger Andrews took a look at Alice Springs and surrounding stations some years ago. 2C of manufactured spurious warming. Not pretty:
https://tallbloke.wordpress.com/2012/10/11/roger-andrews-chunder-down-under-how-ghcn-v3-2-manufactures-warming-in-the-outback/
and
https://tallbloke.wordpress.com/2012/10/16/roger-andrews-station-quality-at-alice-springs/
Since climate did not catch my attention until 2014, I had to go back and read old posts to try to figure out what got settled, if anything. Since 2012 Anthony Watts appears to be the only one of McIntyre, Curry and Watts still posting on homogenization effects. I had to go back to here, in 2012, to get McIntyre’s last word on station adjustments. I don’t follow Jo Nova, but it seems the BoM’s methods have been a lively topic there. Here is Jo Nova’s update from last September on the ACORN-SAT “major revision”. Not pretty.
After more than a decade of Watts’s calls for making temperature stations fit for the purpose of global climate monitoring, Phil Jones finally agrees. https://wattsupwiththat.com/2018/03/02/alarmists-throw-in-the-towel-on-poor-quality-surface-temperature-data-pitch-for-a-new-global-climate-reference-network/comment-page-1/#comments
Climate scientists have never controlled the temperature monitoring process; they analyse whatever data they can get. Phil Jones, like the others, would always have recommended measurements designed for the purpose. They had some success when USCRN was created 15 years ago. This is their wish list globally.
To have an ideal is not to say that what we do have is inadequate.
Alice Springs is an example of climate data fiddling that has been looked at several times before, for example by Paul Homewood.
https://notalotofpeopleknowthat.wordpress.com/2012/03/26/alice-springsa-closer-look-at-the-temperature-record/
and by me, showing that Alice Springs illustrates the insane instability of the GHCN adjustment algorithm.
https://cliscep.com/2017/02/06/instability-of-ghcn-adjustment-algorithm/
“that has been looked at several times before”
Yes. I thought it was quite a coincidence that the celebrated cases of Alice Springs and Darwin just happened to pop up in this random choice.
Nick, if an auditor looked at your books and found bogus transactions, would you accuse them of cherry-picking? When they asked you for the detailed methodology, would you say, “It’s complicated, but trust me”? Of course not.
I’m sure all the government administrators are good people, but there is no reason government-collected climate data and its adjustment regimes shouldn’t be completely transparent; there are no national security or privacy concerns, and no need to ask for public trust. In fact, I don’t think anyone disputes that citizen audits have been responsible for correcting past errors in climate data collection and processing. I’m sure you agree.
Ron,
“found bogus transactions would you accuse them of cherry picking”
There is, as usual, no evidence of anything bogus. It’s the customary story of “look, they changed something and the trend went up!” But if you adjust numbers, it is very likely that a consequence will go up or down, and it is very easy to cherry-pick just the ones that go up.
In the original kerfuffle I posted this histogram of the effects on trend of the GHCN V2 adjustments, and where Willis’ “random” choice of Darwin fitted in.
Nick, if you or I were presenting our work product to the government (instead of the other way around), we would be required to follow standard practices and to document supporting diagrams like the one you have above.
Who created your histogram? Do you have one marked for each station? Why adjust individual members of the population for random error when one is interested in studying the population as a whole?
What examples of systematic error are there that cool recent years requiring the population to be adjusted?
Have non-climate effects been a warming or cooling bias for the population?
Ron,
“Who created your histogram? Do you have one marked for each station?”
I did. And if you follow the link, it explains how, and gives the R code that produced it. It has, as it says, one entry for each station with more than 80 years data.
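For anyone who wants to reproduce that kind of plot, here is a rough Python sketch of the idea (not the linked R code; the station data layout and the simple least-squares trend fit are my own assumptions):

```python
import numpy as np
import matplotlib.pyplot as plt

def trend_per_century(years, temps):
    """Ordinary least-squares trend, in degrees C per century."""
    years = np.asarray(years, float)
    temps = np.asarray(temps, float)
    ok = ~np.isnan(temps)
    return np.polyfit(years[ok], temps[ok], 1)[0] * 100.0

def trend_change_histogram(stations):
    """stations: list of dicts with 'years', 'raw' and 'adjusted' annual
    means. Keep stations with at least 80 years of data and histogram
    the change in trend caused by the adjustments."""
    changes = [trend_per_century(s["years"], s["adjusted"])
               - trend_per_century(s["years"], s["raw"])
               for s in stations if len(s["years"]) >= 80]
    plt.hist(changes, bins=40)
    plt.xlabel("Adjusted minus raw trend (C / century)")
    plt.ylabel("Number of stations")
    plt.show()
    return changes
```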
I think there are two different effects.
1) Splicing together data from different stations covering different time periods at the same location. For example, Launceston has 3.
2) Homogenisation using various pairwise nearest-neighbour type algorithms (a crude sketch of the idea is below).
GHCN (uncorrected) only has data processed after step 1 has been performed. I am finding that most of the trend changes occur at step 1 and these are always positive.
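To illustrate what I mean by a pairwise nearest-neighbour adjustment, here is a deliberately over-simplified Python sketch (my own crude stand-in, not the actual GHCN or ACORN code): find the largest step in the difference series between a candidate station and a neighbour, and shift the earlier segment by that step.

```python
import numpy as np

def detect_step(candidate, neighbour):
    """Crude breakpoint search on the candidate-minus-neighbour
    difference series: return the index and size of the largest jump
    in the mean difference before and after each possible break."""
    diff = np.asarray(candidate, float) - np.asarray(neighbour, float)
    best_i, best_step = None, 0.0
    for i in range(5, len(diff) - 5):          # leave a few years either side
        step = diff[i:].mean() - diff[:i].mean()
        if abs(step) > abs(best_step):
            best_i, best_step = i, step
    return best_i, best_step

def adjust_candidate(candidate, neighbour, threshold=0.3):
    """Shift the pre-break segment of the candidate so the difference
    series is flat across the detected break (if it exceeds threshold)."""
    candidate = np.asarray(candidate, float).copy()
    i, step = detect_step(candidate, neighbour)
    if i is not None and abs(step) > threshold:
        candidate[:i] += step                  # 'homogenised' early segment
    return candidate
```

Applied station by station against several neighbours, repeated shifts like this are how the early part of a record could end up reshaped well away from any documented move.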
Thanks Nick.
I had no idea Darwin was a celebrated case. I am away for a few days, but when I get back I’ll automate all the comparisons. Incidentally, I am doing something different to Berkeley and GHCN. I calculate the daily average (Tmin + Tmax)/2 and then average all the daily Tav values within each month and each year. My monthly Tmin is the lowest temperature recorded that month, and the annual Tmin is the lowest temperature recorded that year. I discovered in a tweet exchange with Zeke Hausfather that Berkeley instead calculate the monthly average Tmin and Tmax first, and then take the monthly Tav as the mean of those two.
P.S. It is such a pain typing on an iPhone!
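To make the difference in conventions concrete, here is a small Python sketch (invented numbers, and my own reading of the Berkeley/GHCN convention):

```python
import numpy as np

# Invented daily values for one month, for illustration only
tmin = np.array([12.0, 11.5, 10.8, 13.0, 12.4])
tmax = np.array([25.0, 24.0, 26.5, 27.0, 25.5])

# My convention: form the daily mean first, then average over the month;
# the monthly "Tmin" is the lowest temperature recorded that month.
tav_daily_first = ((tmin + tmax) / 2.0).mean()
tmin_absolute = tmin.min()

# Berkeley/GHCN convention (as I understand it): average Tmin and Tmax
# over the month separately, then take the mean of the two monthly values;
# their monthly Tmin is the mean of the daily minima.
tav_monthly_first = (tmin.mean() + tmax.mean()) / 2.0
tmin_mean_of_daily = tmin.mean()

# With complete data the two Tav values are algebraically identical;
# they only diverge when days are missing from one series but not the other.
print(tav_daily_first, tav_monthly_first, tmin_absolute, tmin_mean_of_daily)
```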
Currently, GHCN adjustments are fabricating about 2.5 degrees of warming at Alice Springs:
https://data.giss.nasa.gov/cgi-bin/gistemp/stdata_show.cgi?id=501943260000&dt=1&ds=5
This is what the raw 3-hourly temperature measurements give for Alice Springs Airport. For each day I calculate Tmax and Tmin, then the daily average, and from those the monthly and annual averages. Looks pretty flat to me.
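For what it’s worth, the processing is straightforward; something along these lines (an illustrative pandas sketch with an assumed column name, not my actual script):

```python
import pandas as pd

def monthly_and_annual_means(df):
    """df: DataFrame with a DatetimeIndex of 3-hourly observations and a
    'temp' column. Derive daily Tmax and Tmin, form the daily average,
    then take monthly and annual means of that daily average."""
    daily = df["temp"].resample("D").agg(["max", "min"])
    daily["tav"] = (daily["max"] + daily["min"]) / 2.0
    monthly = daily["tav"].resample("MS").mean()
    annual = daily["tav"].resample("YS").mean()
    return monthly, annual
```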
I just looked again, a few days later, and the faked warming is now “only” about 1.5 degrees. So the GHCN adjustment algorithm is still dysfunctional, illustrating wild instabilities, as I pointed out in my cliscep post a year ago, linked above.
Paul,
The pairwise comparison algorithm they use is akin to placing a weak magnet under a sheet of paper covered in iron filings!