How should you calculate annual global temperature data? We start with HadCRUT4.5 gridded monthly data defined on a 5×5 degree grid and then form the global area-weighted average in 3 different ways (a code sketch of the three approaches follows below):
- Integrate the annual anomaly over month, lat and lon all in one go, weighting by cos(lat)
- First calculate the yearly averaged grid, then integrate this over the lat, lon grid points weighted by cos(lat)
- As 2, but first calculate the NH average and the SH average separately, then take GL = (NH+SH)/2
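For concreteness, here is a minimal Python sketch of the three approaches. It assumes one year of monthly anomalies sits in a numpy array anom of shape (12, 36, 72) for (month, lat, lon) with NaN marking empty cells, and that lats holds the 36 latitude band centres; the names and layout are just for illustration, not the actual HadCRUT processing code.

import numpy as np

def area_weights(lats, grid_shape):
    # cos(latitude) weight for each cell, broadcast across longitude
    w = np.cos(np.radians(lats))
    return np.broadcast_to(w[:, None], grid_shape)

def method1(anom, lats):
    # single pass: average over month, lat and lon together, weighted by cos(lat)
    w = np.broadcast_to(area_weights(lats, anom.shape[1:]), anom.shape)
    ok = ~np.isnan(anom)
    return np.sum(anom[ok] * w[ok]) / np.sum(w[ok])

def method2(anom, lats):
    # first form the yearly averaged grid, then area-average that grid
    # (cells with no data in any month stay NaN and are masked below)
    yearly = np.nanmean(anom, axis=0)
    w = area_weights(lats, yearly.shape)
    ok = ~np.isnan(yearly)
    return np.sum(yearly[ok] * w[ok]) / np.sum(w[ok])

def method3(anom, lats):
    # as method2, but average each hemisphere separately, then GL = (NH+SH)/2
    yearly = np.nanmean(anom, axis=0)
    w = area_weights(lats, yearly.shape)
    def hemi(rows):
        ok = ~np.isnan(yearly[rows])
        return np.sum(yearly[rows][ok] * w[rows][ok]) / np.sum(w[rows][ok])
    return 0.5 * (hemi(lats > 0) + hemi(lats < 0))

With full coverage the three give identical numbers; the differences only appear because of missing cells.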
This is what I get

Three ways to calculate the global annual average from monthly data, compared to the official H4 published annual series (curve 0).
There is a systematic difference between the 3 methods. HadCRUT4.5 seems to be using the single-pass average, as this gives almost identical results to mine – but not quite. Most of the differences between the 3 methods are concentrated in the years before 1950, when the distribution of measurements was sparser and weighted mostly towards the Northern Hemisphere.
The land-only temperature series CRUTEM3 used the (NH+SH)/2 method until about 2011, but then switched to (2NH+SH)/3 for CRUTEM4, the argument being that the NH contains about twice the land surface of the Southern Hemisphere (the arithmetic behind that weighting is spelled out at the end of the post). If we now also apply the second method to the latest GHCN V3C station data, then this is the result.

Comparison of V3C temperature anomalies, calculated using CRU’s gridding and averaging software, to CRUTEM4 annual values.
Again, before 1920 there are systematic differences between the two datasets. In my opinion this difference simply represents an underlying systematic error when calculating net warming from preindustrial times. This error is about ±0.15 C and is in addition to statistical sampling errors. So I estimate that the earth has warmed by 0.85 ± 0.25 C since 1850.
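For completeness, the hemispheric weighting mentioned above is just an area-weighted combination of the two hemispheric means (T_NH, T_SH and A_NH, A_SH are my own shorthand here for the hemispheric land anomalies and land areas):

GL = (A_NH × T_NH + A_SH × T_SH) / (A_NH + A_SH)

and with the CRUTEM4 assumption that A_NH ≈ 2 × A_SH this reduces to

GL = (2 × T_NH + T_SH) / 3

whereas equal areas give the older (T_NH + T_SH) / 2.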
I think you are showing that the result you get depends on the answer you want to achieve. The quantity in question is an absurdity, so you are free to choose any method to calculate it. But it remains meaningless.
Clive, I noticed that CRU chose the method producing the steepest trend for their HadCRUT4.5. It is a point not often made that just because a method is valid does not mean it rules out other valid methods of comparison, as you demonstrate so clearly. We can just imagine all the adjustment choices. One wonders how much time is spent experimenting to find the warmest plausible option.
Can the land surface record diverge from the satellite lower-troposphere trend indefinitely? It will be interesting to see which one has to acquiesce.
Ron,
Yes, I think this is the key point. Recent changes, corrections, extrapolation into polar regions etc. all go in the same direction – to increase apparent warming over the last decade. Whether these adjustments are individually justifiable is not really the point. The strong impression is that the motivation is to look for reasons why the hiatus didn’t really exist.
The averaging algorithm used is one example. Kriging is another – connecting all points with a bed of springs ensures there are no surprises.
I completely agree with “man in a barrel” and Ron Graf: the land-based databases are massively manipulated/distorted, with so-called “administrative adjustments” and other arbitrary alterations (e.g. when to include/exclude collection sites, etc.). Since the IPCC models’ credibility was crushed by the real-world data (e.g. the “pause” starting in the 90’s), the manipulations, or perhaps more accurately molestations, of land-based data have made them utterly meaningless.
Most disturbing are the manipulations starting around 2005.
I do not know why you keep omitting the satellite data, which is unmolested and which has been brought to your attention before. Please factor in at least here (http://www.drroyspencer.com/latest-global-temperatures/) and here (http://www.climate4you.com/Text/Climate4you_December_2016.pdf) for starters.
Moreover, the “administrative adjustments” are applied over the entire time-series. That is, most every month, the entire 100+ years of land-based data is “administratively adjusted”. What a surprise :-), almost without exception, the more recent temps are “adjusted up”, while the early temps (e.g. 1900) are “adjusted down”. This serves both to manipulate recent dates to pretend that global warming is upward (rather than the truth = mostly sideways), and ALSO to rotate the entire time-series to further distort their (IPCC et al) story that the planet is warming faster than ever and to exaggerate the correlation with Mauna Loa (which is another can of worms, but for another day).
This is clear and wilful dishonesty by NASA/GISS/NCDC et al. The Hadley data is also molested in very much the same way, but to a slightly lesser degree (if I recall correctly, the Yanks and the Brits have slightly different ways of guessing/interpolating for the Arctic cap etc).
The discrepancy between the Hadley/CRU sea surface temps and actual readings is bigger still.
I really wish you would stop paying lip service to the IPCC et al. Merely using their data gives the impression that you believe it … are you really a scientist? If so, the rules are:
1) Get data
2) Verify the data !!!
3) Check if the nature of the phenomenon can be determined even with high-integrity data
… once we can get past these, there is a possibility (but no guarantee) of a scientific discussion, and we are many decades away from even the first two steps.
Incidentally, once you research the exact types and nature of the various “administrative adjustments”, you might possibly wish to give those “hand waving” arguments some value. Even if those manipulations had some scientific merit, which they most certainly do not, we would be faced with another show-stopper … if all that manipulation is really required (e.g. they have now nearly doubled the (land-based) warming of the last 100+ years by sheer manipulation) then what value does it have? That is, if every month you must mess about with the entire global time-series, and also 100 years later decide to double everything … or whatever … then the data is pretty much worthless (certainly in the UN/IPCC et al context).
But of course, the satellites don’t lie, so it is clear that the (land based) manipulations are highly suspicious, and at the very least, unreliable to the point of being useless.
Separately, you may have noticed that the IPCC et al have moved away from “proper modelling” (since those models were crushed by real data) to statistical/time-series modelling … that in itself is a sure sign of desperation, and the mark of a scoundrel. The move to statistical etc. methods is primarily a kind of solicitor’s trick to allow them to pretend that “some trend” is meaningful (when it’s bollocks), and especially also to then add variance envelopes to promote fear-mongering with concocted scenarios; to paraphrase PM Disraeli:
“There are three kinds of lies: lies, damned lies, and statistical modelling.”
… though it may not really matter much longer, since in most places on the planet where they have started to “go green”, electricity prices have quintupled, and they will increase much further. On top of the massive increase in production costs, there is now also the addition of new carbon taxes … in many places we are starting to see the old and poor, and small businesses, really suffer, and we are not too far away from a “revolt”. Electricity in many of these affected areas is now 20-35 US c/kWh, compared to traditional sources at 7-10 US c/kWh, and natural gas plants are (at the moment) producing for as low as 4 US c/kWh … in a few years the “all in” cost of “green” electricity will be 40-50 US c/kWh, and maybe 100+ US c/kWh if “storage” is added … that means the typical “Western/Developed” country with, say, a per capita GDP of around USD 50k/year will have seen its annual electricity bill go from about USD 1,200/year to USD 10,000/year or more … I think sometime before that the masses will be marching on government armed with pitchforks etc. (or AR15s for Yanks 🙂
… at that point, all these ridiculous pseudo-sciences won’t matter.
… though with Trump in the White House, and the possibility of similar “shifts” in France, the Netherlands, Germany etc. … perhaps some sensibility/practicality will put off the worst till later
And as well as all these factors to add or subtract as we/they see fit, there is the accuracy of the base measurements. Do these well-meaning folk really believe that their equipment and satellite measurements are infallible? The temperature rise, including systematic error and statistical sampling errors, is 0.85 ± 0.25 C. I think we should see 0.85 ± 0.25 C ± an apparatus factor (satellites can’t lie?), just like in the school lab! Then we might almost be down to 0.5 C as a minimum.
Anyway, this is the stuff of sleepless nights unless you believe that Mother nature will take care of it as she has done for millennia.
Not sure what Bas had in mind with the remark “satellites can’t lie?”. If that was intended to reference my earlier comment, then Bas has used (intentionally or not) two of the standard tricks for disinformation, amongst other things:
1) My actual statement was:
“But of course, the satellites don’t lie, so it is clear that the (land based) manipulations are highly suspicious, and at the very least, unreliable to the point of being useless.”
… here “don’t lie” (cf. Bas’s “can’t lie”) clearly means something other than what Bas might have in mind.
2) Of particularly crucial importance: Bas’s statement is “cherry picked” and presented grossly out of context, to the point of inappropriately altering the meaning. Clearly, my statement was made entirely in the context of comparing “unmolested” data to “molested” data. I don’t think it can be taken in any other way, and in particular my statement can’t be taken to mean that there is no instrumentation error.
Having said so, Bas’s instrumentation error bars are completely swamped by many other issues, such as the IPCC et al ignoring volcanoes and other massive phenomena. For example, and as demonstrated many years ago, during the 20th century there were four medium-sized volcanoes that collectively had a (cumulative) cooling effect in excess of 2 C. Something similar can be argued for the 19th century. As such, any “average” calculation made for the purposes of forecasting 100 years out will carry at least ±2 C (cf. ±0.5 C) … and in that case, the entirety of the UN/IPCC et al context “implodes”, since they claim that already the “sky is falling” for a 2+ C rise.
Having said so, I agree with Bas that instrumentation verification is critical, and I had, at the outset of my statements, emphasised that the first two necessary steps for science are data, and data verification.
As far as satellite “data quality” goes, I am certain that UAH has instrumentation errors. Though once again, the issue I took with Clive was his reliance on (and only on) clearly manipulated data, while completely ignoring unmolested data that contradicts/disagrees with the manipulated data. Indeed, while satellite temperature data tend to be considered highly reliable, other types of satellite data (e.g. radiation spectra) certainly do demonstrate instrumentation curiosities that may exceed the UN/IPCC et al all-too-important 2 W/m^2 CO2 “forcing”.
Moreover, the manner of averaging starts to move us away from what is actually important … which is the energy balance. Since black- or grey-body radiative flux is a function of T^4 or T^3, any fiddling with the data can materially alter the in vs out flows from the planet (at least “massively” in the UN/IPCC et al “soil your trousers” over 2 W/m^2 context). Even apparently trivial issues can be important. For example, in many places the day/night temperature differential can be 10-15 C or more. That means the day side’s outgoing flux is proportional to, say, (300 K)^4, while the night side’s flux is proportional to, say, (285 K)^4, and clearly those DO NOT average to (292.5 K)^4. Similar problems creep in with latitudinal assumptions, assumptions of symmetry, etc. So simple and nested averaging over various locations and time scales “breaks thermodynamics”, and the averaging necessarily cannot conserve energy … again, the context is important: if you think an additional 2 W/m^2 means the “end of life”, then these types of averaging and assumptions may well introduce “errors” that exceed 2 W/m^2.
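As a rough toy illustration of that day/night point (a bare blackbody calculation with the Stefan-Boltzmann constant, ignoring emissivity and the atmosphere, and nothing to do with any actual dataset), averaging the two temperatures before applying T^4 gives a noticeably different flux than averaging the two fluxes:

SIGMA = 5.67e-8                                # Stefan-Boltzmann constant, W/m^2/K^4
t_day, t_night = 300.0, 285.0                  # the day/night example temperatures

flux_of_mean_t = SIGMA * ((t_day + t_night) / 2) ** 4      # ~415.0 W/m^2
mean_of_fluxes = SIGMA * (t_day ** 4 + t_night ** 4) / 2   # ~416.7 W/m^2

print(mean_of_fluxes - flux_of_mean_t)         # difference ~1.6 W/m^2

That gap of roughly 1.6 W/m^2 from one simplistic averaging choice is of the same order as the 2 W/m^2 forcing being argued over.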
In addition, and also as demonstrated previously, global/local temperatures and the climate are each aperiodic in character (i.e. have fractal attractors). Any averaging necessarily DESTROYS information. The massive and nested degree of averaging that so many employ in “climate science” is tantamount to profoundly altering the fundamental character of the dynamics into a completely different and unrelated phenomenon (e.g. the real dynamic has a fractal attractor, but the “averaged” dynamic may have a very different or non-fractal attractor … i.e. a completely different phenomenon). Put differently, whatever it is you think you are modelling, all this averaging guarantees that you are NOT modelling the climate, or even temperature.
Crucially, and as I so often emphasise, it is absolutely essential to state in advance the objective of your investigation and modelling etc. If Clive had said, “hey guys, I have three algorithms that produce these results, what do you think?” … and did so outside of a highly controversial, politically hijacked subject, and with non-contentious data, then that would be one thing. However, as it stands Clive seems to be involved in the politically hijacked subject of “climate change” and relying on highly contentious data, and in a way that gives that dodgy data and those politicos masquerading as scientists some measure of credibility … which I find highly objectionable, as a scientist (i.e. if Clive declared his posts to be ideology rather than science, then he is welcome to believe whatever philosophy suits him).
Finally, and MOST important of all, it does NOT actually matter what the global average is, or whether you can predict it (of course you cannot predict it, as anybody with even a tiny knowledge of non-linear dynamics can prove) … what does matter is what effect these matters have on humans! It is absolutely impossible to make any (proper scientific) linkage between global average temperature and the “standard of living” of the planet (even if you could predict the temperature to ±0.5 C, which of course you cannot).
Indeed, in spite of the UN/IPCC et al insisting that only “bad scenarios” are possible in the future, looking at the last 100 or so years, there has been around 0.7-1.0 C of warming, and it has been by very far the single greatest expansion of the planet’s/humans’ standard of living in all of man’s history. So if this is what global warming brings, then clearly we should be praying for global warming … of course, that is not proper science, but surely the past suggests that it is at least possible (cf. the UN/IPCC’s complete denial of even the possibility of such).
… of course it is possible that Bas had different intentions with his remarks, in which case you are welcome to “switch to ignore” 🙂
Clive,
As always, the way to look at it is to ask what you are implicitly doing to the cells without data. In forming an average, when you leave items out, you are assigning them the average of the remainder. So it depends what “remainder” means.
1. Case 1: Empty cells are assigned the global average for that month.
2. Case 2: Empty cells are assigned the global average for the year. The set of empty cells may be different, depending on whether you rule out cells with missing months. If you don’t, but average omitting those months, you are assigning them the average value for that cell for the year.
3. Case 3: When you average by hemisphere, you assign to each missing cell the hemisphere average rather than the global one. This has a significant effect, because there have historically been more missing cells in the SH, and the SH has behaved differently. So compared with (1), this gives a bigger hemisphere influence.
What you should really do is replace all missing data with your best estimate. None of these qualify, which is why you get variation.
I’ve written a lot about averaging at Moyhu. This is typical, and has links to more.
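Nick’s point about “assigning them the average of the remainder” can be checked with a toy numpy example (an illustration only, not any of the actual CRU code): the mean of the observed cells is exactly what you get if you first fill every empty cell with that same mean, so leaving cells out is equivalent to infilling them with the average of the rest.

import numpy as np

cells = np.array([0.3, np.nan, 0.7, 1.1, np.nan])          # toy grid cells, NaN = no data

mean_observed = np.nanmean(cells)                           # "leaving the empty cells out"
filled = np.where(np.isnan(cells), mean_observed, cells)    # infill with that same mean

print(mean_observed, filled.mean())                         # both 0.7: identical results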
Nick,
Thanks – I think you have explained the problem! It also explains why GL=(NH+SH)/2 or GL=(2*NH+SH)/3 give larger differences when coverage in the SH was poor.
However, I don’t think there is any fair way round the problem. Interpolating or inferring values for cells without data looks a bit suspect, although I know that is in effect what Berkeley Earth and your least-squares methods do.
Clive,
Inferring from sample values is the basis of any continuum calculation, in any field. You can only measure at finitely many points; if you want a global average, you have to infer the rest. Everywhere. The fact that it shows up here as inferring for empty cells is just an artefact of those mechanics. You do the same when you assign the cell a temperature based on points within the cell. That is inference based on finite observations, and also has an error range.
That’s why I push this viewpoint that you should infill explicitly, and do the best possible infilling. Because you have to infill. Just leaving cells out and thinking that fixes the problem leads to the situation you describe here. The system does it for you, and not optimally.
I’ve written recently on the error involved here. I think there are better ways than gridding, but I tried a fancy way of infilling here, which seems to work well.
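To make “explicit infilling” concrete, here is one deliberately crude variant: fill each empty cell with the mean of the observed cells in its own latitude band before taking the cos(lat) weighted average. It is only a sketch of the idea, not the method Nick links to, and the function and argument names are invented for the illustration.

import numpy as np

def infill_by_latitude_band(grid):
    # replace NaN cells with the mean of the observed cells in the same latitude row
    filled = grid.copy()
    for i, row in enumerate(grid):
        if np.isnan(row).all():
            continue                       # a band with no data at all stays empty
        filled[i] = np.where(np.isnan(row), np.nanmean(row), row)
    return filled

def global_mean(grid, lats):
    # cos(latitude) area-weighted mean over whatever cells remain
    w = np.cos(np.radians(lats))[:, None] * np.ones_like(grid)
    ok = ~np.isnan(grid)
    return np.sum(grid[ok] * w[ok]) / np.sum(w[ok])

# usage: global_mean(infill_by_latitude_band(yearly_grid), lats)

A better fill (kriging, least squares, etc.) would use neighbouring cells rather than a whole-band mean, which is the direction Nick argues for.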