April H4.5-ST = 0.80C

The average temperature anomaly for April 2017 from HadCRUT4.5 using Spherical Triangulation (ST) is 0.80C. This is 0.06C higher than the standard 5-degree gridded result (0.74C).

Comparison of both averaging methods for the last 20 years. Yellow shading shows the difference between the two.

It is the years with high Arctic anomalies that differ the most. Here is the spatial dependence for April, which is still showing net Arctic ‘warmth’.

Results of spherical triangulation method for April 2017

Finally here is a comparison between H4.5-ST and GHCNV3-ST.

The main difference is the sampling pool of land stations.


9 Responses to April H4.5-ST = 0.80C

  1. erl happ says:

    Mapping and describing change is one thing (and accurate measurement an absolute requirement) but accounting for it is another.

    Arctic and mid latitude temperature anomalies are a function of the circulation of the air. Those who study climate devised the Arctic Oscillation Index to map the relationship between surface atmospheric pressure in the mid latitudes (that varies little) and surface pressure at 50-90° North that determines the flow of the air in mid and high latitudes. Pressure relationships are further conditioned by the distribution of land and sea.

    That said, surface pressure is mainly a function of the ozone content of the air above 400 hPa as noted by Dobson and other observers more than a hundred years ago. It is change in the ozone content of the stratosphere, naturally variable, that drives the surface pressure relationship that determines the flow of the air.

    More recently, climatologists refer to the ‘annular modes’ in describing these evolving pressure relationships. These modes are recognised as the prime modes of natural climate variation on interannual and decadal time scales. More than likely, given the pattern in the data that you have mapped in the second figure, the annular modes are responsible for the change that is seen over the period for which we have data.

    The determination to interpret the data as warming caused by man represents an ideological commitment that flies in the face of reason.

    Resources are scarce and the living standards of the bulk of humanity still poor. The current focus on climate is both immensely wasteful and a diabolical distraction from matters that are of real concern: employment, and the distribution of income that is responsible for the lack of it. The left of politics has gone loopy. Fortunately, those in the middle, who are the swinging voters, now realise this.

    • Clive Best says:

      I am sure that the Arctic Oscillation does indeed change Arctic temperatures on monthly and longer time scales, especially in winter. I found a correlation with strong spring tides.

      How does ozone affect surface pressure? In the stratosphere it absorbs UV, and it is also a greenhouse gas, but I don’t see why it changes surface pressure.

      I agree with your comments about climate hysteria overwhelming more pressing needs. We are heading for a perfect storm of conflict if we restrict the development chances of a growing mass of humanity in the developing world. There will be a growing flood of migration to Europe which may result in a repeat of the 1930s.

      This is a far greater danger than a 1-2 degree rise in temperature mainly in cold regions. Equatorial regions will warm hardly at all. They just increase heat circulation to high latitudes.

  2. DrO says:

    You have stated that you are using ST for HadCRUT. HadCRUT is a combination of CRUTEM for land and HadSST for seas (which is itself a mysteriously converted set from ICOADS).

    So:

    1) Just to be clear, based on the “globe image’s description” above (which claims the entire globe is ST’d), is the triangulation of the globe in your image above ST for the entire globe, or just for the land-based instruments?

    2) Arctic (and generally polar) regions are notoriously poorly instrumented, and even where there is instrumentation, the reliability of the histories is in some question. Therefore even a tiny instrument error will be exaggerated by such giant ST’s (i.e. a few dubious temps get much higher weighting in the GT average). In fact, the record shows there may well be giant errors, and now those are disproportionately weighted by giant triangle areas, which doubly puts in very serious doubt the veracity/value of polar results. For example, along lat 87.5 there are only 7 instruments for a 5×5 grid (i.e. 7 where at least 45 should be), and one of those seven shows an anomaly of 12.4C while the remaining six are 2’ish … as I had shown you in a somewhat more detailed document by email.
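
    As a quick, purely illustrative sketch of that weighting problem (the numbers are the ones quoted above; no triangle-area weighting applied yet):

```python
# Illustrative only: one dubious reading in a sparsely instrumented
# polar cell (7 stations along lat 87.5; six near 2C, one at 12.4C).
anomalies = [12.4, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0]

cell_mean = sum(anomalies) / len(anomalies)
print(round(cell_mean, 2))  # 3.49 -- pulled well above the 2C the majority report
```

    Dropping the suspect station brings the cell mean back to 2.0C, i.e. a single instrument shifts the cell by roughly 1.5C, before any large-triangle weighting amplifies it further.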

    3) Have you tried using interpolation other than linear? If you are going to rely on “black boxes”, why not at least run Renka’s SSRFPack, which immediately permits three different interpolation methods for ST’s, including a tension spline method, so you are not restricted to linear.

    Not sure if I remember correctly, but SSRFPack should be freely available in C, Fortran, and MatLab … need to check.

    4) Have you tried experimenting with inclusion/exclusion of selected points, altering the grid orientation, etc., to test for numerical stability/convergence?

    5) Any reason for, yet again, excluding satellite data, and which show very different results?

    • Clive Best says:

      1. Triangulation is done for both land and ocean data. However, the ocean data (HadSST3) is only available on a 5 deg. (lat,lon) grid, so all I do is triangulate those values. In order for the triangulation to work I have to add a tiny random number to each grid point, since otherwise the points lie on parallel lines. The reason given for gridding is that floating buoys are moving measurements, so they can appear and disappear in any grid cell during one month. If I had access to the underlying data then I could probably triangulate that.
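
      The jitter step can be sketched as follows (a minimal illustration, not the actual IDL code; the function name and jitter size are assumptions). Spherical Delaunay triangulation is normally done as the 3D convex hull of points on the unit sphere, and a perfectly regular grid produces degenerate coplanar sets of points that a hull routine cannot triangulate uniquely:

```python
import math
import random

def to_unit_vector(lat_deg, lon_deg, jitter=1e-6):
    """Map a (lat, lon) grid point to a 3D unit vector, adding a tiny
    random perturbation so that perfectly regular grid points do not
    form degenerate (coplanar) sets that break the triangulation."""
    lat = math.radians(lat_deg) + random.uniform(-jitter, jitter)
    lon = math.radians(lon_deg) + random.uniform(-jitter, jitter)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

# The jittered vectors would then feed a 3D convex-hull routine: the hull
# of points on the unit sphere is their spherical Delaunay triangulation,
# which is why the triangulation has to be done in 3D rather than (lat,lon).
p = to_unit_vector(87.5, 2.5)
print(abs(sum(c * c for c in p) - 1.0) < 1e-12)  # True: still on the unit sphere
```

      Note the perturbation is applied in (lat, lon) before projection, so the point stays exactly on the sphere; only the degeneracy is broken.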

      2. This is right, and results in the differences between datasets. Hadcrut4 only uses fixed 5-degree cells with data and ignores the north pole because there are no measurements. GISS, Berkeley, and Cowtan and Way all use some sort of interpolation to cover the poles. I don’t use any interpolation but triangulate across the poles. This is only possible in 3D, because (lat,lon) has a singularity at both poles.

      3. All I do right now is take the average of the 3 vertices of each triangle and assign that to the enclosed triangle. Any other interpolation would make assumptions that subdivide the triangle into incremental parts, and I don’t want to do that. Hadcrut4 is at least the most honest because it doesn’t make any assumptions. Nor does spherical triangulation.
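
      A minimal sketch of this vertex-averaging scheme, assuming the spherical triangle areas are computed with l’Huilier’s formula (the function names are illustrative, not the actual code):

```python
import math

def gc_arc(p, q):
    """Great-circle arc length (radians) between two unit vectors."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(p, q))))
    return math.acos(dot)

def st_area(p, q, r):
    """Spherical triangle area on the unit sphere via l'Huilier's formula."""
    a, b, c = gc_arc(q, r), gc_arc(p, r), gc_arc(p, q)
    s = (a + b + c) / 2
    t = (math.tan(s / 2) * math.tan((s - a) / 2)
         * math.tan((s - b) / 2) * math.tan((s - c) / 2))
    return 4 * math.atan(math.sqrt(max(t, 0.0)))

def global_average(triangles):
    """triangles: list of ((p, q, r), (t1, t2, t3)) -- vertex unit vectors
    and their anomalies.  Each triangle is assigned the plain mean of its
    3 vertices, weighted by its spherical area."""
    num = den = 0.0
    for (p, q, r), (t1, t2, t3) in triangles:
        w = st_area(p, q, r)
        num += w * (t1 + t2 + t3) / 3
        den += w
    return num / den

# Sanity check: one octant of the sphere (three mutually perpendicular
# vertices) has area 4*pi/8 = pi/2.
ex, ey, ez = (1, 0, 0), (0, 1, 0), (0, 0, 1)
print(abs(st_area(ex, ey, ez) - math.pi / 2) < 1e-12)  # True
```

      With a full triangulation covering the sphere, the weights sum to 4π, so the result is a true area-weighted global mean.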

      4. No I haven’t. The grid is defined only by the location of stations each month. There is no pre-averaging or anything like that.

      5. I have nothing against satellite data except that I don’t understand it. The surface data seems to attract the most attention so that is why I am looking at that.

  3. DrO says:

    Clive, this is in reply to your “Clive Best says: May 30, 2017 at 7:15 am” reply to my comments. I have elected not to use the “direct reply” since nested replies become narrower and narrower indented columns, and in my view, harder to read.

    Did you not get my email I sent you on May 21?

    Virtually every single point in your reply is addressed/defeated (indeed crushed) in the 36 pages therein.

    If you have not got it, I will resend it, as your Posts are going “wildly off the reservation”.

    Amongst other things:

    I had sent you images and examples of the “source” data for HadSST3, triangulations etc. for the source data, straight from the “source” (i.e. ICOADS). Of course, the actual data looks NOTHING like the fantasy-land HadSST3 5×5. I had provided not only the webpages/URLs for those sources, but also illustrated how you can connect directly to the “on line” NDBC, ICOADS, NASA, etc. netCDF databases with any competent netCDF “viewer”.

    I think the “higher end” netCDF “viewers” have far too many “bells & whistles” and are likely to cause you some grief. I recommend NASA’s viewer, called Panoply (you can download that for free from NASA; just Google for it). It is much simpler compared to some of the “all seeing/all dancing” ones; it’s not perfect, but it seems to give just the right balance of sophistication and ease of use for the types of issues here.

    I had included explicit and detailed illustrations of dubious averaging in the HadCRUT and HadSST3 datasets.

    I don’t see how you can make the remarks you do when clearly the underlying data is available, and it utterly crushes the notion of a beautiful fantasy-land 5×5 SST … which you appear to take as “God given”, without actually checking the integrity of the data (i.e. yet another occasion of breaching Science Rule 2).

    Although a point for later/below: clearly, you don’t seem to understand surface-based instrumentation either, yet you include that, but not satellites … highly suspicious; see below re your Point 5).

    Also in the May 21 email I demonstrate and prove mathematically (as you could obtain also from any elementary spherical geometry reference) that:

    IT MAKES NO DIFFERENCE for GAT whether you use spherical triangles or not. ANY polygon/polytope approach, e.g. flat triangles, would give the exact same GAT (when you use only linear interpolation, as you do), so long as you use it consistently. For example, one could model the earth as a cube or as tetrahedra, and obtain exactly the same GAT as the “matching” centroid-averaged ST approach.

    That would avoid the colossal KLUDGE you employ, but do not disclose nor prove. In particular, since the vast majority of triangles in your image above CANNOT POSSIBLY be spherical triangles as they are presented, you cannot claim you use ST. Your comment that “I have to add a tiny random number to each grid point” is utter nonsense:

    a) Adding random numbers helps your NOT-ST no more than not adding random numbers would. If you don’t understand why that is, you should at least learn a little elementary spherical geometry.

    … here is virtually the most basic first lesson in spherical geometry: Any two points on the sphere will support a Great Circle (GC) … clearly, adding tiny random numbers makes no difference whatsoever in the context of the existence of GC’s.

    By definition, all ST’s have sides that are arcs of GC’s.

    Possibly the third or fourth lesson in elementary spherical geometry would teach this: Your ST area calculator (presumably l’Huilier’s method, or some other facet of the cosine rule) does NOT GIVE A DAMN about what you imagine those grid points to mean. It does NOT KNOW ABOUT parallel lines. It just takes two points at a time to calculate (great-circle) arc lengths (or the cosine-rule equivalent). That is, regardless of the grid points you give it, it ALWAYS treats each pair in a collection of three grid points as lying on a GC, whether they do or not in your mind, or in the actual intended use of the data.

    That is, give the ST area calculator any three points and it will calculate an ST area … but that is NOT the area of the vast majority of the triangles in your image.

    Adding random numbers to the grid points is not just nonsense, it is fooling yourself into thinking that you have done something valuable, when that cannot be true.

    If by some miracle (outside the realm of mathematics) random numbers did actually “help” in some sense, you would be obliged to prove/show the impact on the results.

    b) If you had performed a comparison of the ACTUAL 5×5 “impossible to be ST’s” against the true ST’s implied by the same grid points (random numbers are a red herring), you would find a substantial distortion. Just the error in that may well exceed any difference created by your claimed-to-be-superior methodology. I think you owe it to yourself to prove this on your own, as that process will force you to learn at least a little spherical geometry.

    … I think you should also get into the habit of proving matters a priori, rather than simply imagining something to be true.

    Regarding your Point 2): I don’t know what it is going to take, but ONCE AND FOR ALL, you ARE INTERPOLATING, whether you KNOW IT OR NOT, or whether you UNDERSTAND IT OR NOT. I have proven this several times already. Also, the May 21 email provides several different algebraic approaches proving the same thing. Perhaps at least one of those super simple mathematical derivations will finally teach you a sufficient basis in mathematics to understand this.

    You are using linear interpolation; or, if you like, your methodology amounts to exactly the same thing as explicit linear interpolation, whether you know it or not. Full stop!

    Your statements about the “singularity at the poles” are yet another colossal red herring/lack of understanding. You can run your 5×5 grid all the way to the poles and it would be perfectly OK, though the last ring of “tiles” would degenerate from trapezoidal to triangular. CLEARLY, you can still calculate their areas and thus their weighting/contribution to GAT.
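
    This can be checked directly: the exact area of a lat-lon cell on a sphere is R²(λ₂ − λ₁)(sin φ₂ − sin φ₁), and the same formula covers the degenerate triangular tiles touching ±90. A short sketch (illustrative only):

```python
import math

def cell_area(lat1, lat2, lon1, lon2, R=1.0):
    """Exact area of a lat-lon cell on a sphere of radius R.  The same
    formula holds for the last ring at the poles, where the 'tiles'
    degenerate from trapezoidal to triangular."""
    dlon = math.radians(lon2 - lon1)
    return R * R * dlon * (math.sin(math.radians(lat2)) - math.sin(math.radians(lat1)))

# A full 5x5 grid, poles included, tiles the whole sphere:
total = sum(cell_area(lat, lat + 5, lon, lon + 5)
            for lat in range(-90, 90, 5)
            for lon in range(-180, 180, 5))
print(abs(total - 4 * math.pi) < 1e-9)  # True: the cells sum to 4*pi*R^2
```

    So the weighting of each cell in GAT is well defined right up to the pole; the cells merely shrink in proportion to the change in sin(lat).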

    In fact, with your “centroid averaging” approach, including the poles is trivially simple, and makes no difference whether you use ST or various other schemes.

    In any case, for those (other) situations where singularities arise, one can always make a change of variables, or use other mathematical devices, to obtain GAT and obviate any singularity, since a singularity, if it exists, is simply a manifestation of an arbitrary choice of coordinate system … in a sense, it is not a “proper” singularity.

    Regarding your Point 3) This is A SERIOUS BIT OF RUBBISH! HadCRUT makes colossal ASSUMPTIONS, and many of them seriously dubious/dishonest. I had demonstrated this in considerable detail in the May 21 email I sent you.

    Even if you have not bothered with the May 21 detailed explanation, you are in VERY SERIOUS breach of SCIENCE RULE 2: verify your data! Something I have been repeating and emphasising ad nauseam. Had/Met, on their site state that HadSST3 comes from ICOADS. A look at ICOADS and its constituents, such as NDBC, etc. immediately proves, on even a cursory examination, that there is HUGE manipulation/fitting/assumptions in HadSST3, which represents 70% of the HadCRUT results.

    Also, the last sentence in your Point 3) demonstrates a catastrophic lack of understanding of the meaning of spherical triangulation and how you use it for GAT. While the pure mathematical side of ST may not make any “physical” assumptions, a MAJOR PROBLEM is GARBAGE IN/GARBAGE OUT. That is, even if ST made no “physical” assumptions, your reliance on that to conclude that the ST results must therefore be “valid” in some sense … well, that is a catastrophic failure of understanding.

    Laundering dirty/crime based money (crap data) through a legitimate bank (ST) does not actually validate the money, and certainly does not legitimise the original crime.

    Honestly Clive, what are you up to?

    In any case, I don’t understand why you would have any objection to running freely available packages that permit you to test your linear interpolation/vertex averaging against different types of fitting. I urge you to try SSRFPack and compare to at least the tension-spline method therein … it’s free.

    Incidentally, even if you are going to avoid any other methodology and continue with limited use of IDL: I can’t remember for sure which one(s), but one or more of the IDL TRIANGULATE “black boxes” are actually based on 1980s “technology”. As with SSRFPack, there is also a free, well-tested, and widely used package called STriPack for spherical triangulation, which is no older than 1995, though I believe some freely available versions have been updated around 2003/2008.

    Regarding your Point 4): UTTER NONSENSE, the SST grid is NOT defined by stations. There are vast tracts of ocean without stations (some are 15×15 and 30×30 “empty” grids). Many places that have stations have stations that don’t work at all, or that send rubbish data. There are many other critical problems. For example, drifting buoys tend to concentrate in currents, and thus will never enter a vast number of grid areas, etc. etc. … it’s all in the May 21 email. Therefore, to get a 5×5 grid, there must NECESSARILY be a huge amount of messing about.

    Even on a 1×1 grid (as ICOADS provides), stations that actually work often send only one or two readings per month … hmmm, what sort of average is that? … and there are many other serious “averaging” and “data existence” problems, as illustrated in the May 21 email.

    There are a vast number of other deep problems, such as that HadSST3 is based entirely on water temp, not air temp (as the land-based instruments record). Amalgamating two profoundly different types of data as if they were “the same” is, in the absence of proof, a catastrophic problem. The manner in which water temps change is almost surely a profoundly different process compared to air temps over land: for example, all the many different types of currents and “oscillations”, and many other complications such as the large amount of “phase change” in water (evaporation, freezing/melting, etc.).

    However, your remark completely misses the crucial point. If you are going to claim that your linear interpolation/ST process is the “be all and end all”, or “better/worse in any way”, then, in science and mathematics, you’re REQUIRED to prove it. A standard required set of analyses in numerical methods includes convergence/stability/error analyses of the methodology. My question was in connection with some proof that your method at least qualifies, in some standard numerical-methods sense, as valid or tolerable.

    Notice: even if your results are close to some other results, that in isolation in NO WAY proves any of the results (in math speak, it is neither necessary nor sufficient). The other results may be rubbish, your results may be rubbish, etc. Until you can prove “something”, you have “nothing” … and we are back to Gödel.

    To use numerical methods, you need to learn a bit about the rules and tools.

    You can’t simply claim something is valid science just because you imagine it to be.

    Regarding your Point 5) In a sense this is the WORST of it all. This is a clear demonstration that you are neither a scientist nor use scientific methods, and that your methodology is deeply driven by anti-scientific thinking.

    a) Science forbids arriving at a scientific truth by consensus. In a sense, science is the antithesis of democracy. Anyone relying in any way whatsoever on some manner of consensus (or on the fact that something receives broader attention) to pretend there to be a scientific truth is at the very least not a scientist, or perhaps something much worse.

    The reason surface-based data receives more attention than satellite data is precisely due to manipulation and excuses to promote a political agenda, as might be the case here.

    When there are competing/contradictory results amongst various methods, and most especially in such a completely politically hijacked subject, your excuse for omitting crucial matters that may well contradict your “claims” does not hold (scientific) water.

    When even the self-admitted confidence intervals/error bars provided by HadCRUT amount to a “warming error” almost the same size as the entire warming of the 20th century, how can you make such “bold” claims and pretend that your results are valid, or that they overturn results which otherwise contradict them?

    In any case, and as has been emphasised repeatedly before: if you are going to omit crucial facts/data, particularly those that may contradict your results, you should AT THE VERY LEAST acknowledge this, and make a clear statement to avoid the impression that you are trying to mislead the unwary reader.

    b) Re excluding something because you don’t understand it: that is not permitted in science; you can’t stick your head in the sand (at least not in science). In any case, you clearly don’t understand the surface data either (indeed you admit you don’t even know where it comes from), yet you include that, but not satellites … seems hypocritical to me.

    As a matter of interest, and you should really learn a bit about this on your own, the satellite instrumentation has resolution almost down to a cubic centimetre. At that resolution, with many satellites, with a near continuum of readings everywhere and also over the poles, no issues mixing water temp with air temp, no issues due to currents, etc etc, there is virtually no need for any kind of fancy spherical anything or fancy interpolation anything, or any of the huge and frequent “administrative adjustments” in the surface data etc etc ….

    I appreciate that would obviate the unnecessarily expensive ST etc methods, and in a sense put your emotionally connected ST method “out of business”, but still don’t you think that is worthy of at least some scientific consideration?

    Finally, please read the May 21 email, and if you have already, and still insist on what you present here and earlier, then, I think we are done, since that would convincingly establish an anti-science process!

  4. Clive Best says:

    Sorry I hadn’t seen your email until now – seems to be a problem with my gmail.

    You misunderstand what I am doing. GAT and changes in its spatial distribution are basically the only measurements that exist to support the theory of ‘dangerous’ global warming. In that sense it is meaningful, because so much depends on it, in particular the world’s economy. Therefore it is important to check the published results and to look at the methods that are used. The land data are based on individual weather stations around the world. It is highly unlikely that there is a global conspiracy to fiddle all these measurements such that warming results. Corrections have been made to older data, which can look suspicious, but even using the uncorrected data some warming still results, although about 0.02C less than the corrected data (for land).

    Blue is uncorrected and red is corrected (NCDC) temperatures for land.

    Of course there is ‘interpolation’ in any gridding scheme. Lat, Lon squares interpolate one average temperature to areas of thousands of square miles of varying altitudes. Grid sizes vary proportionally to cos(lat). Triangulation interpolates averages to all areas within one triangle. Triangle sizes depend on the density of weather stations. The poles have few measurements so there the triangles are larger. The main difference though is that spherical triangulation covers 100% of the surface of the earth, whereas lat,lon grids cover something like 85%.

    The Oceans cover nearly 70% of the earth’s surface and therefore dominate the global average. As you correctly say, triangulation doesn’t help because HadSST3 has already processed the buoy data to a fixed grid. It only helps by pinning the triangulation of coastal land stations to the ocean data. So yes, ideally we should work with the ICOADS individual floating-buoy data. However, these are temperature measurements, not ‘anomalies’, so somehow we would have to calculate 30-year normal seasonal temperatures for each possible ocean location. The problem is that, unlike land stations, buoys are continuously moving around. So far as I know the only groups who have done this work are Hadley and ERSST, and I am not familiar with how they do it either. Probably someone independent should undertake this work as well! Sounds like you could do this.

    Whether AGW is an urgent problem or a benign problem has now become more of a political than a scientific issue. Whether there are today any actual solutions for ‘mitigation’ should be an engineering problem, but instead it has also become a political problem.

  5. DrO says:

    Actually, I understand exactly what you are doing, at least up until the last series of posts, where there is a chance of some worrying change in direction.

    1) One problem is that your understanding of math and physics is horribly bad. I don’t mind that too much, and I don’t mind nudging you in the right direction … so long as your “heart” and “integrity” are in the right place. It seems lately your “heart” may be succumbing to the “dark side”; should that happen, I have no interest in assisting you in any way.

    2) A second problem is that you either do not have an understanding of, or have abandoned, the scientific method. Data first, then check the data, then check if the data will even help, etc.

    Once again: heat is not temperature; mixing profoundly different types of data is not allowed unless you have proven validity; using any data pre-1980 is nothing short of ridiculous; etc. There is no way on earth anything appreciable before then could have anything other than massive error bars, to the extent of completely delegitimising the entire UN/IPCC “dangerous” fear mongering.

    Even with the current/latest HadCRUT error bars, one could go sixty to eighty years in a pause while HadCRUT simultaneously showed “warming” of 0.8C, and either or both of those COULD BE JUST AS CORRECT/JUST AS WRONG.

    Again, emphasising that heat /= temperature: the pedestrian but grossly bad understanding of physics on this point is astonishing. You seem to present what could be utter garbage without checking the data. Even if you got your ST maths correct, you would simply be “polishing some rubbish” … shiny rubbish is still rubbish. Why not at least start by examining the REALLY BIG problem of how and where the data comes from, and how they MANIPULATE it into their fantasy-land 5×5 grid.

    … but, and for the last time: while you seem to love ST’s, the exact same GAT would be produced by “flat tiles”. You could use any polytope in place of a sphere. Incidentally, if you at least moved to spherical trapezoids, the vast majority of your ST errors and gaffes would disappear … but flat trapezoids would provide the same GAT, and they are even easier.

    And we haven’t gotten to the more difficult bits yet, like thermodynamics, quantum mechanics, transport phenomena, etc. (e.g. your understanding of radiation is clearly in need of substantial assistance, the necessity for you to start thinking in terms of an unstable steady state is critical, etc.).

    Averaging the average of the average ad absurdum, and similar “just keep smoothing it to death” approaches, particularly in a heat balance driven by quartic forces, is just plain wrong.
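
    The quartic point can be made concrete with Stefan-Boltzmann: because emission scales as T⁴, averaging temperatures first and then computing the flux is not the same as averaging the fluxes themselves (the two temperatures below are illustrative only):

```python
# Averaging temperatures before applying a quartic (Stefan-Boltzmann)
# law gives a different answer from averaging the fluxes themselves.
SIGMA = 5.670e-8  # W m^-2 K^-4, Stefan-Boltzmann constant

T_polar, T_tropics = 250.0, 300.0  # illustrative temperatures in kelvin

flux_of_mean = SIGMA * ((T_polar + T_tropics) / 2) ** 4   # flux at the mean temp
mean_of_flux = (SIGMA * T_polar**4 + SIGMA * T_tropics**4) / 2  # mean of the fluxes

print(flux_of_mean < mean_of_flux)  # True: smoothing first understates emission
```

    The inequality is Jensen’s inequality for the convex function T⁴: every layer of averaging applied to temperatures before the quartic law loses information the heat balance needs.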

    Until some inroads are made on these issues, I can’t see how you could possibly “check” anybody’s work, or any claim of “dangerous” this, that, or the other thing.

    3) Crucially, you also interleave policy with science in a careless manner to arrive at ridiculous fantasies. An easy, and required, solution to that is to stick to the scientific method, and to apply it in the context of the problem at hand. For example, the IPCC claims that +2C over 100 years will cause the “sky to fall” and that 2-3 W/m^2 of radiative forcing will bring the “rapture”.

    For that kind of claim to have any meaning would require a forecast that would predict future T to something at least as tight as +/- 0.2C. Do you think there is any possibility of making a Temp forecast 100 years out +/- 0.2C?

    The real data crushes those “beliefs”.

    4) The biggest problem (I hope, since if the next-biggest problem exists, I am done with this blog) is that you have completely no understanding of fundamentally unstable phenomena (e.g. the climate, wars, financial markets, lava lamps, etc.).

    So it is very much worse still: since we can prove that climate dynamics follow a chaotic attractor (i.e. fractal properties), there is virtually NO CHANCE WHATSOEVER of making any kind of forecast remotely resembling the needs/context of the “dangerous” UN/IPCC nonsense.

    In any case, since we know nothing about important long-scale periodic effects (PDO, deep-ocean cycles, magnetic pole movements, tectonic/core effects, etc.), we would need at least another 70-110 years of “proper” data from now forward to be able to make any kind of sensible statement about what may actually be going on.

    Indeed, that you would even say:

    ” GAT and changes in its spatial distribution are basically the only measurements that exist to support the theory of ‘dangerous’ global warming. In that sense it is meaningful, because so much depends on it, in particular the world’s economy. ”

    This demonstrates a colossal misunderstanding of physics, warming, economies, etc. Amongst other things:

    a) The one and only proper measure is a heat balance, full stop! That means surface and satellite spectral analysis, and a huge amount of analysis about periodicities and movements/change in form of energy.

    b) Temp /= heat

    c) Averages of averages of averages ad absurdum necessarily DESTROY the critical heat-balance equations. Thus, GAT is exactly what you should NOT compute. What you need to do is perform a proper heat balance, and also incorporate the many complex heat dynamics that nobody knows how to do. Even if they could, the equations seem to have chaotic attractors.

    d) There is NO POSSIBILITY WHATSOEVER to connect AGW with economics, with the exception that the warming in the 20th century coincided with the greatest increase in global standard of living in recorded history. Anybody who claims differently is full of BS, and I will bet my life I could prove every single such claim to be BS … if you have any, let me know, and I will prove why/how they are wrong, or in most cases bullocking nonsense.

    5) Finally, while forecasting temp in the UN/IPCC context is guaranteed to be impossible, even if you could do it, connecting temperature to “human welfare” is infinitely more complicated/unlikely.

    As above, the only actual experience we have with global warming is the 20th century … which has brought the single greatest expansion of global standard of living and human welfare in recorded history … of course, that in itself is no guarantee that it will continue in the future, but at the very least we must consider the possibility that the word “dangerous” may need to be replaced with “happy”.

    … of course, the UN/IPCC et al does not permit even the discussion that warming could be a good thing, since they are in the fear mongering/cash businesses, but that is exactly anti-science, where we (scientists) are OBLIGED to consider possibilities.

    Incidentally, have you ACTUALLY READ the UN/IPCC “solution” to AGW? Their two-pronged “mitigation” and “compensation” strategy is rubbish. They don’t actually mean mitigation in any sense we would understand.

    Are you aware that effectively all aspects of the UN/IPCC solution to the GHG matter amount to transferring giant piles of cash from the rich countries to the poor countries (both the mitigation and the compensation elements)? Now how in God’s good Earth will transferring huge piles of cash from one country to another make any material impact on the carbon footprint? This is identical to “sin taxes” on cigarettes and alcohol, and to parking tickets. Those are scams to raise many billions in (tax) revenues. If you really didn’t want somebody to create a traffic mess by illegal parking, then instead of a 300-quid fine, put them in prison for 12 months … I think all illegal parking would disappear “overnight”.

    … clearly, they are not worried about the carbon footprint; they are worried about cash (really huge amounts of cash).

    Do you have any idea what actually happened at the Paris Accord? If you did, you would see yet another confirmation that the entire issue is one of cash.

    I urge you to examine also what happened at the climate conference in Poland, and the details of the Chinese commitment to “climate change” (it’s only a couple of pages), etc etc.

    The entire “global warming crusade” has NEVER been an “engineering” problem. It has been a POLITICAL issue from the start. You need to do a great deal more research. You should also read about people like Maurice Strong, how and why the UN created the IPCC, etc. etc.

    Most importantly, this matter has always been a political attempt to claim that the rich countries have ruined the planet for the poor countries, and thus, as in a court of law assessing “damages”, that the rich countries must pay trillions to the poor countries. This is why CC is centred on CO2, since there is a weak but just-about-plausible story connecting the industrial revolution to the rich countries via CO2.

    … amongst many other things, H2O is, even by the IPCC’s admission, a GHG four times more powerful than CO2, and by laboratory and field tests may be 15 times more powerful. So how come it is only ever about CO2? Bollocks … it is necessary to keep the “industrial revolution/rich country/CO2/damages” story alive.

    Notice: the UN has almost 200 member states. About 150 of those are “poor” countries (in fact many are gangster/warlord countries). Now the UN says to them: if you vote/agree that CC is made by the rich countries, then you will get a giant pile of cash for free … hmmm, tricky, I wonder how that vote will go. Well, you don’t need to wonder, just look at the Paris Accord.

    This is also why they are in such a big hurry. Once the cash is transferred, they are done. The longer we wait, the greater the evidence against them.

    Incidentally, this is exactly what happened with various political-activism movements in the 60’s, 70’s, 80’s, and 90’s, during which they moved between anti-nuke, global warming, then global cooling, then global dimming, then GMOs, then “globalisation”, etc. etc., and now this … except now they are much more practised/slick, and the cash aspect of CC implies massive new taxes, so almost all (especially left-ish) governments want “in on it”.

    ——————————————-

    On the issue of “conspiracy”

    Regarding your use of the word “conspiracy”, I am not sure if you are trying to get too clever there … I don’t recommend it. In any case, I have never claimed a “conspiracy”; rather, I have reported, repeatedly, on the alterations to databases by the keepers of those databases. Many of those “administrative adjustments” are actually announced/disclosed by the keepers of the database (e.g. NASA, GISS, etc.), typically with some manner of “hand-waving argument” about “heat island effects”, etc.

    As you don’t even seem to check the data sources, I am not surprised you haven’t bothered to check on these issues either.

    In addition, the email I sent you shows a comparison of HadCRUT3 with HadCRUT4 … what gives? Why the need to change the entire 170-year history?

    Indeed, if you follow Dr Humlum, such as on this page http://www.climate4you.com/GlobalTemperatures.htm, as I have so often suggested, you will see that if you save each ENTIRE monthly release, rather than just taking the last month and adding it to your collection, then virtually every month the entire 170+ years of data changes … get it, almost every month the last 170 years of data is changed. Some months, the changes to the entire 170 years are massive. Almost without exception, the changes add warming to recent decades and add cooling to a century ago (i.e. they increase the slope as well as the recent temps). These “administrative adjustments” have become much more forceful since 2005, when they really took off.

    … it is not surprising that AA’s really took off in 2005, since by then it was starting to become clear that the IPCC models were full of BS … and in grand standard UN/IPCC style, instead of changing their models, they decided to change the data. BTW, you should also read Hansen’s, Schmidt’s, and Jones’ policies on changing data … it is frightening, but more frightening (and bizarre) are all the Mann et al. bits … way too weird.

    These changes are made only to the surface data, not the satellite data. The massive divergence between the surface and satellite is worrying, particularly if you plan on basing your entire “dangerous” story on such contradictory and unreliable data.

    Indeed, even if the two sets did not contradict one another, the fact that you have to perform constant and huge manipulations of the entire 170+ years of surface data every month does rather question how RELIABLE that data is. If it in fact needs such frequent and massive alteration … is it sufficiently reliable to decide the planet’s fate??

    … again, proper science requires proper data integrity testing … it’s hard work to get it right; it’s only 140 characters to disseminate crap.

    PS. I attach two excerpts from HadCRUT data sets. One is a copy I keep and just update monthly; the other is the current online data set. NOTICE: not only have they added the April numbers, but they have CHANGED EVERY SINGLE number in the history. They do this pretty much every month, and pretty much in the “same direction” in almost all of those months.

    …. or would you say that comparing actual databases and seeing actual changes is a conspiracy?

    From http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/time_series/HadCRUT.4.5.0.0.monthly_ns_avg.txt

    Just a short selection from the last few months:

    HadCRUT4 “incremental” (my manually updated copy), as of May 30, 2017

    2016/12 0.593 0.539 0.632 0.559 0.628 0.472 0.714 0.532 0.648 0.457 0.726
    2017/01 0.741 0.697 0.785 0.708 0.775 0.587 0.896 0.687 0.801 0.579 0.908
    2017/02 0.851 0.797 0.896 0.817 0.884 0.734 0.967 0.792 0.910 0.720 0.981
    2017/03 0.876 0.837 0.918 0.844 0.908 0.745 1.006 0.821 0.931 0.733 1.018
    2017/04 0.740 0.700 0.781 0.709 0.771 0.600 0.880 0.683 0.792 0.588 0.889

    HadCRUT4 “full” (current online data set), as of May 30, 2017

    2016/12 0.597 0.545 0.635 0.563 0.632 0.479 0.715 0.537 0.652 0.465 0.728
    2017/01 0.740 0.695 0.784 0.706 0.773 0.586 0.893 0.686 0.799 0.578 0.906
    2017/02 0.842 0.790 0.887 0.809 0.876 0.727 0.958 0.784 0.901 0.714 0.972
    2017/03 0.875 0.837 0.917 0.843 0.908 0.745 1.005 0.820 0.930 0.733 1.017
    2017/04 0.740 0.700 0.781 0.709 0.771 0.600 0.880 0.683 0.792 0.588 0.889
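    The month-by-month comparison being made here can be sketched in a few lines. This is a hypothetical illustration, not the commenter’s actual procedure: it uses only the date and the first data column (the ensemble median) from the rows quoted above, whereas the real monthly_ns_avg.txt rows carry ten further uncertainty columns.

    ```python
    # Compare two snapshots of a HadCRUT4 monthly series and report how the
    # ensemble-median column differs between them. A real check would parse two
    # saved copies of HadCRUT.4.5.0.0.monthly_ns_avg.txt; here we use only the
    # date and median values from the rows quoted above.

    incremental = """\
    2016/12 0.593
    2017/01 0.741
    2017/02 0.851
    2017/03 0.876
    2017/04 0.740"""

    full = """\
    2016/12 0.597
    2017/01 0.740
    2017/02 0.842
    2017/03 0.875
    2017/04 0.740"""

    def parse(text):
        """Map 'year/month' -> median anomaly (C) for each line of a snapshot."""
        out = {}
        for line in text.strip().splitlines():
            date, median = line.split()[:2]
            out[date] = float(median)
        return out

    old, new = parse(incremental), parse(full)
    for date in sorted(old):
        delta = round(new[date] - old[date], 3)
        print(f"{date} {delta:+.3f}")
    ```

    Run against two complete saved releases instead of these five rows, the same loop would list every month whose value changed between releases.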

  6. Steven C Crow says:

    Clive, as usual, I like your temperature anomaly maps and believe they suggest a geophysical origin of temperature anomalies. But a question. What reference temperature or temperatures do you use to compute the anomalies? Does the reference temperature vary from triangle to triangle, or is it a single global temperature? Or maybe it is a function of latitude, which would make for an interesting map itself.

    • Clive Best says:

      I always use the average monthly temperatures over the period 1961-1990, calculated for each station. These are the supposed ‘normal’ values. The anomaly for every month is then simply the average temperature minus the normal for that month.

      That is why you need to be careful when interpreting coloured warming maps. Anything before 1960 tends to look ‘cold’ and anything after 1970 tends to look ‘warm’. GISS especially play games with this effect by using a non-linear colour scheme, which makes recent warming look very large. However, at any point on the earth’s surface, temperatures change by only fractions of a degree month to month. The effect looks larger in the Arctic because temperatures there are much lower than elsewhere, so an increase from -50C to -47C looks catastrophic, whereas it isn’t!
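      The baseline-and-anomaly calculation described above can be sketched as follows. This is a minimal illustration with an invented station record, not Clive’s actual code: for each station and each calendar month, the ‘normal’ is the mean temperature over 1961-1990, and the anomaly is the observed monthly temperature minus that normal.

      ```python
      # Per-station anomaly as described above: the 'normal' for a calendar month
      # is that station's mean temperature for the month over the 1961-1990
      # baseline; the anomaly is an observed monthly temperature minus its normal.

      def monthly_normals(series, base=(1961, 1990)):
          """series: {(year, month): temperature in C} for one station."""
          sums, counts = {}, {}
          for (year, month), t in series.items():
              if base[0] <= year <= base[1]:
                  sums[month] = sums.get(month, 0.0) + t
                  counts[month] = counts.get(month, 0) + 1
          return {m: sums[m] / counts[m] for m in sums}

      def anomaly(series, year, month, normals):
          return series[(year, month)] - normals[month]

      # Invented April record for one station: the baseline Aprils average
      # 11.0 C, so an April 2017 reading of 12.5 C is a +1.5 C anomaly.
      station = {(1961, 4): 10.0, (1975, 4): 11.0, (1990, 4): 12.0, (2017, 4): 12.5}
      normals = monthly_normals(station)
      print(anomaly(station, 2017, 4, normals))  # -> 1.5
      ```

      Because each station is compared against its own normals, a warm Arctic station and a temperate one can show the same anomaly even though their absolute temperatures differ by tens of degrees, which is the point made above about Arctic maps.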

Leave a Reply