Tweaking Global Temperature Data

The “kriging” biases described in the previous post can be mostly avoided by using ‘spherical’ triangulation. This treats the earth as a sphere and triangulates all measurement points onto the earth’s surface. In this case vertex angles no longer add up to 180 degrees. The data are then re-gridded onto a regular 2 degree grid covering all latitudes and longitudes using an inverse distance weighting. The spatial average of measurements over all latitudes and longitudes is then calculated. The data used are all 7300 station data from GHCN V3C combined with HadSST3 ocean temperature data. Here is a comparison of this new method with all the other data.

Comparison of annual global temperature anomalies normalised to 1961-1990. The new interpolated data, labelled CBEST, essentially agree with the other extrapolated (kriged) data. Notable differences are that CBEST enhances the net el Nino warming peaks in 1998 and 2015. The 2 degree and 5 degree gridding results now lie nearly on top of each other.
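For orientation, here is a minimal Python sketch of the re-gridding and spatial-averaging steps just described. It is an illustration only, not the IDL analysis code linked below: all names are mine, and it omits the triangulation step, showing only the inverse distance re-gridding onto a 2 degree grid and the area-weighted global mean.

    import numpy as np

    def idw_regrid(lats, lons, temps, cell=2.0, power=2.0):
        # Re-grid scattered station anomalies (NumPy arrays, lats/lons
        # in degrees) onto a regular grid by inverse-distance weighting.
        glat = np.arange(-90 + cell / 2, 90, cell)
        glon = np.arange(-180 + cell / 2, 180, cell)
        grid = np.empty((glat.size, glon.size))
        for i, la in enumerate(glat):
            for j, lo in enumerate(glon):
                # great-circle distance from the cell centre to each station
                d = np.arccos(np.clip(
                    np.sin(np.radians(la)) * np.sin(np.radians(lats)) +
                    np.cos(np.radians(la)) * np.cos(np.radians(lats)) *
                    np.cos(np.radians(lo - lons)), -1.0, 1.0))
                w = 1.0 / np.maximum(d, 1e-6) ** power
                grid[i, j] = np.sum(w * temps) / np.sum(w)
        return glat, glon, grid

    def global_mean(glat, grid):
        # weight each grid row by cos(latitude) for the spatial average
        w = np.cos(np.radians(glat))[:, None] * np.ones_like(grid)
        return np.sum(w * grid) / np.sum(w)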

Of particular interest is the extrapolation of data near the poles, which is where most warming has been observed. The problem though is that there are very few measurements in that region especially before 1940. Figure 2 shows triangulation grids for both poles in 1880 and 2016.

Triangulation of surface temperature data at both poles for 1880 and 2016. Notice how poor coverage remains in South America, Africa and Australia even in 2016.

In 1880 there are no measurements further south than 70S or further north than 70N, and triangles cover huge distances. Kriging into these regions is therefore likely to introduce artefacts. The 2016 triangulation shows much better coverage, but there are still no data north of 75N. Antarctica fares better because measurements now exist at research stations, including one at the south pole.

Here is the full 167 year comparison.

Global Temperature data comparison 1880-2016

My conclusions are that before 1940 it is best not to use any interpolation into unmeasured areas of the world because the coverage is so low; the Hadcrut4 methodology is preferable there. Recent warming is enhanced by interpolation because empty lat,lon cells are filled in under the influence of ‘warm’ nearby neighbours. Likewise natural warming cycles such as el Nino also get enhanced.

Data  are available here
IDL code that generates this data is here.

Next I will use the triangulation of the measurement locations itself to calculate the global average, avoiding any interpolation.


15 Responses to Tweaking Global Temperature Data

  1. Nick Stokes says:

    Clive,
    I looked up your code. I haven’t used IDL for about 30 years, and I was quite impressed by what TRIANGULATE could do. But I think it can also do exact integration of the linear interpolation, rather than use gridding. SPHERE will return the 3D cartesian node coords, and TRIANGLES contains the node indices of each triangle. If you loop over triangles, for each you can make a 3×3 matrix of coords. You can create a weight vector for the nodes by adding the abs(determinant) of that 3×3 to the weight of each of the 3 nodes of that triangle. The det (DETERM) is proportional to triangle area. Then the weighted sum of node values, with those weights (divide by total), is the exact integral of the linear interpolator.

    IDL will probably also get the spherical area, given the nodes. But I don’t think the difference matters.
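    In rough Python, the scheme described above reads as follows (a sketch, not the actual IDL; nodes is an (N,3) NumPy array of unit vectors, triangles an (M,3) array of node indices, values the N station anomalies):

        import numpy as np

        def mesh_mean(nodes, triangles, values):
            # weight each node by abs(det) of the 3x3 vertex matrix of
            # every triangle touching it; abs(det) ~ triangle area
            weights = np.zeros(len(nodes))
            for tri in triangles:
                weights[tri] += abs(np.linalg.det(nodes[tri]))
            # weighted node sum / total weight = exact mean of the
            # piecewise-linear interpolant
            return np.dot(weights, values) / weights.sum()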

    • Clive Best says:

      Nick,

      I have just spent the last 24 hours doing exactly that, thanks to your advice! I calculate the centroid and spherically corrected area of each triangle produced by TRIANGULATE. The centroid value is the average of the three vertex values. I then form the monthly global weighted average based on the triangular cells.
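      Schematically, in Python rather than IDL (my sketch of the calculation just described, with illustrative names; nodes holds the unit-vector vertices, triangles the TRIANGULATE output, values the monthly anomalies as a NumPy array):

          import numpy as np

          def tri_area(a, b, c):
              # spherical excess = area of the triangle on the unit
              # sphere (van Oosterom & Strackee formula)
              num = abs(np.dot(a, np.cross(b, c)))
              den = 1.0 + np.dot(a, b) + np.dot(b, c) + np.dot(c, a)
              return 2.0 * np.arctan2(num, den)

          def monthly_mean(nodes, triangles, values):
              areas = np.array([tri_area(*nodes[t]) for t in triangles])
              centroids = values[triangles].mean(axis=1)  # vertex average
              return np.sum(areas * centroids) / areas.sum()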

      The results are exceptional. I will try and write this up tomorrow.

      cheers

  2. DrO says:

    Clive

    Not trying to be funny or anything, but I think you have gone off the deep-end here.

    I produced a note that illustrates the (many) errors. The “short” version is 7 pages, and relies on quite a few charts and images. I can’t seem to find an image upload tool on your WordPress/Blog. If you could make such available, I could post the corrections/explanations.

    … in the alternative, I could put a link to a PDF or .doc file somewhere for you to download and “html-ify” onto your blog page.

    … let me know on this blog page, or email DrOli (at) red-three (dot) com

    Cheers

    DrO

  3. Clive Best says:

    Here is the comment from Dr. O in full.

    “Tweaking???” … more like delusional.

    Garbage in – Garbage out: pretty pictures does not science make (you are falling into the same regime as the IPCC et al in focusing on “(pretty) wrapping paper” rather than on substance, and possibly also now the widespread “disease” of “black-boxitis”).

    CAVEAT: my previous comments in previous posts regarding the breaking of thermodynamics and heat balances continue to apply, as does the connected notion that temperature, and especially global average temperature is a particularly misleading quantity (if you can’t get the heat balance right, everything thereafter must fail). Previous arguments relating to the non-linear dynamical problems continue to apply (i.e. even if you get the heat balance right, you are, almost surely, still “screwed”). The switch to time-series methods is the last gasp of scoundrels and guarantees dishonesty in the UN/IPCC et al approaches (lies, damn lies, and statistics). And so on, and so on.

    … this is the “short” version of this response. The long version includes a much more detailed treatment of the “Administrative Adjustment problem”, “reality impact” compared to el Nino ONI, model errors, etc.

    First, a couple of “immediate” matters regarding the images presented.

    a) Your claim in the last sentence of para 1:

    “Here is a comparison of this new method with all the other data.”

    can’t be true. You don’t even show earlier HadCRUT, or any NASA or NCDC versions, etc. CRUCIALLY, you don’t show ANY satellite data (which you always seem to insist on omitting). I have added some of those below to demonstrate some of the many bits of (giant) nonsense and dishonesty.

    b) Comparing actual data as released from the sources, as for example can be seen on pages 5 and 6 of (http://www.climate4you.com/Text/Climate4you_January_2017.pdf) for the HadCRUT, GISS, and NCDC releases, there appears to be a serious problem with your charts. Notably, it is very and emphatically clear that the actual values, and perhaps more importantly the core “character”, of each of those data sets are very different from one another. Whereas your charts have essentially all time series within a tiny margin of difference, and ALL of essentially IDENTICAL character (e.g. peaks, volatility, etc). I have no idea how that has come about, and it seems very suspicious.

    It is conceivable that all data sets would be rendered virtually identical if your method were applied to each of them, but that would just further demonstrate that something is very wrong (e.g. if you “excessively over-average”, then you can get almost any two things to look similar).

    … so what is the point of your presentation? Is it that if you use the same “bent data” with the (mostly) same (nonsensical/circular) approximation methods, then you end up with the same, almost surely wrong, solution? I think you need to read Kurt Gödel’s work, as what you have created here is one way to fall afoul of the Incompleteness Theorem.

    At least one of your results must be proven to be “correct” before you can make any comparisons, or draw any conclusions … and that has not happened. Indeed, the comparison should be based on “external foundations” (e.g. get satellite spectra and see which interpolation method most closely approximates that, via, say, some Stefan-Boltzmann basis … though that would be far from perfect too, but at least it would avoid the tremendous circularity). See Section (B) below for a more detailed explanation of the deep and fatal interpolation flaws.

    There are both “big picture” (i.e. climate modelling) fatal problems here (Section (A) below), and there are also fatal problems with just your “little knowledge is a dangerous thing” approach to interpolation/approximation, notwithstanding the points above and previous submissions (Section (B) below).

    (A) Some Big Picture Issues

    1) As you routinely insist on using ONLY manipulated land-based data, here are your results (taken from your link) compared to UAH satellite results for Dec 1979 – Feb 2017 (from the “official” pages such as http://www.drroyspencer.com/2017/03/, or as I did here, from http://www.nsstc.uah.edu/data/msu/v6.0beta/tlt/tltglhmam_6.0beta5.txt).

    CAVEAT: In order to “align” to your inappropriate averaging (e.g. unnecessary creation of “man-made/anthropogenic :-)” serial-/auto-correlation etc.) and different reference date for the “departures”, I have used what I believe is sufficiently close to your (erroneous) post-processing (e.g. the “peaks” etc match, but it is the “trend” that is more important).

    Figure 1 a). CBest/HadCRUT vs. UAH 1979 – 2017

    The blue line is the data you supplied via the link in your post. The yellow line is the same data adjusted down to UAH’s different reference period (i.e. a straight subtraction of a constant due to the 1961-1990 ref vs. the 1981-2010 ref). The dashed lines are simple Maximum Likelihood Estimators (MLE’s) allowing up to a (global) cubic polynomial.
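    For reference, such a fit is one line in most environments: a least-squares polynomial fit is the MLE under i.i.d. Gaussian residuals. A minimal Python sketch, with fabricated placeholder data standing in for the anomaly series:

        import numpy as np
        t = np.arange(1979, 2018, dtype=float)  # years (placeholder)
        y = 0.015 * (t - t[0]) + np.random.normal(0.0, 0.1, t.size)  # fake anomalies
        coeffs = np.polyfit(t, y, 3)   # least-squares cubic fit
        trend = np.polyval(coeffs, t)  # the dashed-line values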

    ASIDE: It would have been much better if your data were provided with at least a monthly frequency. That would eliminate a goodly portion of your “man-made” serial-correlation, important features would be preserved, and comparison would be easier/more realistic, such as next.

    Dr Humlum at Climate4You does this every month, such as in Fig 1 b). That plot averages the two “groups” of data (i.e. land- vs. satellite-based), so the “divergence” is a little less than above, but still quite clear.

    Figure 1 b). Land-based (CRU/GISS/NCDC: red) vs. Satellite UAH/RSS (blue) 1979 – 2017 from Climate4You (page 9 of http://www.climate4you.com/Text/Climate4you_January_2017.pdf). The blue/satellite data here is a combination of UAH + RSS, not exactly as in Fig 1 a), which is UAH only.

    Figure 1 c). The HadCRUT data releases include (their) 95% error bar estimates, which demonstrate the giant uncertainty just in their instrumentation (typical variance being on the order of 0.4C, which is more than half the warming of the entire 20th century) … and this has grave implications for any interpolation/approximation averaging.

    CLEARLY SOMETHING IS VERY WRONG!

    Especially “wrong” is the divergence since about 2000 – 2005! Why are the land-based data and the satellite-based data diverging from one another? Is it due to the so-called “Administrative Adjustments” (AA’s) that are applied to the land-based data sets? The “divergence” is now greater than the HadCRUT error boundary.

    To deviate at a rate of approximately 0.15C/15 years implies a (wrong) divergence of about +1.0C/100 years, so just the AA’s provide at least half the IPCC et al’s +2C “sky is falling” scenario (assuming there are not more AA’s in the next 85 years … hmmm). Moreover, those same AA’s have “increased” the warming for the last century or so by about 0.7C, which is about the same size as the entire warming for the 20th century (see Figure 4 in Note 10).

    ASIDE: In fact, the situation may be proven to be very much worse, and possibly inescapably fatal, for HadCRUT/UN/IPCC as the current el Nino “spike” may well be “symmetric” as all previous el Nino’s with ONI > 1 have been (see discussion in my Note 10, to be released mid April 2017). In that case, the “pause” will be so prominent as to give the UN/IPCC et al no place to hide, and the divergence in the data and the MLE’s will be particularly emphatic. We might be in a position of high certainty on this point in as little as 12 months, but much of the symmetry is already observable in Figure 2 (see also below).

    Notice, the AA’s don’t simply increase recent temperatures, they also lower very old (e.g. 1900) temps. This rotates the manipulated data to create incorrect current levels and an incorrect rate of change. Moreover, the AA’s also “dampen” the results to show a much less volatile pattern, to give the false impression that it is mostly a straight (up-sloping) line with little variance.

    … why is all this happening, and why has there been a large increase in AA’s since around 2005? Indeed, why is there a need to apply a new set of AA’s to the entire 140 year database virtually every single month?

    a) The increases in recent values are required to create “bent data” that better “agrees” with the (rubbish) IPCC et al models (which have been crushed for the last 20 or so years by all the data that has become available since). All of the predictions of 80’s, 90’s, and early 2000’s have been shown to be rubbish due to the so-called “pause” (and other factors, such as spectral data in “GEWEX Radiative Flux Assessment (RFA)” (i.e. http://www.wcrp-climate.org/documents/GEWEX%20RFA-Volume%201-report.pdf)).

    Indeed, the IPCC themselves said (back in the day) that if the models fail in 15 years, it’s all over for the models … well they have been failing for about 20 – 25 years (and possibly much worse, see Note 10).

    Those of us around at the time may recall the “hoopla” and delays with respect to the release of AR5, since it was clear by 2010 that the data crushed the models, and so the IPCC et al had to “back pedal” and come up with a “newly spun” version of the AR5 released about 1 – 2 years later than scheduled to allow time for the cover up.

    b) The (false) rotation is required (for the IPCC et al) to further misrepresent the speed of warming, and to further misrepresent the correlation to CO2.

    c) The inappropriate dampening is required to give the impression that real world temp character is similar to the (incorrect) models’ characteristics (since the models can’t predict volcanoes, el Nino’s, or any important non-monotonic character, etc very well). Dampening also helps to imply a correlation and causality with CO2, where there is none, etc. In addition, the dampening has the added “bonus” to give the false impression that the real world is more certain compared to what it actually is, and thus the (false IPCC et al) charts provide a “better” basis for the UN/IPCC et al lie.

    Notice also, in the alternative, that if the AA’s were required for proper science, rather than being an excuse for the (scientific) sacrilege of “bending the data to fit the models”, it would still be a show-stopper. It should be clear that if you need to AA temp data to the tune of +1C/100y, then the data is highly unreliable, at least in the UN/IPCC policy context, and may not/cannot be relied upon or used.

    … so what can you say about anything, including kriging, in this light? Indeed, the IPCC et al claim that much/most of the “sky is falling” warming is due to the (claimed, or AA’d) warming of the Arctic, which is also the least instrumented, and offers the most opportunity for kriging/manipulation. It does all seem rather suspicious (though they don’t provide sufficient information to be able to untangle each of the AA’s precisely (e.g. kriging vs. “heat island”, “insulated buckets” vs. “cooling water intake” … , etc) and their relative contribution to the total AA).

    Also, the degree of instrumentation of the Arctic has actually decreased massively in recent decades. Thousands of stations in Canada and Siberia went “off line” or were omitted without explanation. Some stations are included, then excluded, then included etc etc, on what appears to be an arbitrary basis (sounds like more “arbitrary AA” to me).

    … all you can say is that it is a giant bit of bollocks, and just a bunch of politicos masquerading as scientists for (bent) ideological purposes. Pretending that some alternate packaging of the rubbish “removes bias” is complete nonsense and, indeed, dangerous.

    No amount of pretty pictures can change that. You may have found beautiful wrapping paper, but “that stuff” in the box sure smells.

    (B) Interpolation/Approximation Basics (and fatal flaws)

    The basis for your interpolation/approximation methodology is badly flawed.

    1) The point from above, of comparing one unknown/guess to another unknown/guess CAN IN NO WAY say anything about the (scientific) truth of ANY of the plots in your images. Again, go see Gödel.

    2) Spherical Triangulation: It would be nice to obtain absolute precision on what you mean here. Do you mean that the vertices of the triangles lie on a sphere, but each cell is a “flat tile”, or that each cell is a “portion of the sphere”, or ???. If the tiles are not flat, and are curved purely to reflect the planet’s curvature, then it is an error. Remember, you are interpolating/approximating temperature, and apparently you are not co-kriging (i.e. you are ignoring altitude), and so adding “sphericity” to the tiles is just plain wrong.

    3) Even if you had a proper basis for comparison (e.g. a proper “anchor” to prove/compare different interpolation/approximation assumptions), your approach is still woefully inappropriate and smacks of “a little knowledge is a dangerous thing”. Separate from the many points already made regarding the tautologically circular and systematic serial-correlation problems, your “new-found spherical/centroid” method is exposed to deep and potentially fatal flaws (or at least you have not made any statements on how you got around various fatal traps).

    For example, the “spherical” area adjustment is so tiny in comparison to even just the errors admitted by the MET/CRU that it is hardly worth the trouble (e.g. virtually every single month’s global average temp in HadCRUT has a “95% variance range” of about 0.4C, whereas your claimed “bias reduction” appears to be about a tenth or a hundredth of that).

    However, there are more basic and fundamental errors, at least as presented. If you are going to use a Stokes’/Green’s Theorem approach (as your “vertex averaging” is a trivially simple subset of that), then you must first take care to avoid various, and likely fatal, problems.

    ASIDE: If what you mean when you say “spherical triangulation” is as per 2) above, please note that for simplicity I have used “flat” tiles below. Exactly the same points could be made with “spherical” tiles, but that would require a more complex discussion which would dilute the main points unnecessarily.

    a) Grid Orientation Bias: Although there are many types of “grid orientation” issues, I don’t have the time to explain all of them, so I will use just one example. The image below shows four vertices, and two different possible “grid orientations”. Clearly, the “centroid/average” temperatures will be different depending on which orientation is used. As such, one is obliged to incorporate additional “machinery” compared to that shown/discussed in your post to deal with such matters.

    b) (Missing) Curvature Bias: The image below shows a schematic of two tiled cells and their interaction with two additional (nearest neighbour) vertices (here, each with a temp higher compared to all the other vertices). This implies that the “correct” profile that connects the “holistic space” is non-linear, and thus the corresponding/common edges of the two tiles must be curved, NOT straight. As such, since vertex/centroid averaging implies “linear interpolation” between the vertices, the vertex/centroid averaging will be wrong/biased. The correct process requires determining the correct (or sufficiently close) curvature of the temp surface (unrelated to the “sphere”), and integrating along the tile boundary (i.e. applying Stokes’/Green’s Theorem) to determine the “centroid temp”.

    Indeed, this type of bias will be more obvious with your “raw centroid” approach, compared to the earlier one where you “interpolated” the “raw tiles” into a uniform lon/lat lattice, since the tiles are bigger and less uniform. However, and to be clear, both methods will suffer at least the same (or worse) error, since the “lon/lat lattice” is just expressing/burying the error in the raw tiles inside the (incorrectly) generated lon/lat lattice (and the “naked” conversion may further exacerbate the error).

    c) Continuity Bias: The next bits become very technical very quickly, but to give a flavour of the matter consider this: If the data were, say, the (x,y,z) spatial points recorded by, say, a geologist for some “terrain”, then the interpolation of the data to approximate the implied surface is quite tricky, at least for the reasons above.

    However, that is a much simpler problem compared to the case where the Laws of Thermodynamics etc must be preserved, or other factors impose higher order continuity requirements. For example, heat balances or other factors may require not only continuity in the data, but also continuity in, say, the first and second (partial) derivatives of the data (e.g. as may be expected in advection/diffusion processes). Indeed, given the complex contours seen in each and every global temperature map at any instant in time, one may reasonably expect much more stringent requirements.

    I will not even begin the discussion of interpolation vs. approximation, since based on your previous posts, you are very far away from being ready for that.

    … and all that must be correct before we tackle the very much more complex (and potentially intractable) problem of continuity in time (which all of the kriging and related methods just destroy on the whim of applying an assumed guess with linear prediction, in a highly non-linear and aperiodic world … again, you seem to be very far away from being able to discuss these matters also).

    … in short, applying “black box” calculators (i.e. where you don’t understand the internals) is a dangerous matter. It is most dangerous when it produces “pretty pictures” and which may seem “plausible” in some sense; as then one is more likely to fool themselves into thinking they’ve done something right, when they have not, and they may not even know it.

    • Clive Best says:

      All your points are valid criticisms of trying to encompass all the complexity of climate, non equilibrium thermodynamics etc. into one global temperature. I agree this is naive and wrong, but unfortunately that is all we have to compare to models.

      I am trying to find the method which introduces the least bias on the incomplete data which we are presented with by CRU/Hadley and NOAA. So this is not really an argument about climate change as such, but about whether and how much global temperatures have changed over the last 165 years. Yes, there is an underlying assumption of spatial continuity in temperature.

      The models are of course themselves over-simplifications at different levels of complexity. Cloud formation, jet stream dynamics etc. are all ‘parameterised’ in models because they are not really understood. Model parameterisations are then ‘tuned’ so as to fit past warming and atmospheric measurements, before being extrapolated to ‘project’ future warming.

      Climate change has become such a political and economic issue that I suspect climate scientists may even be scared of their models being proved wrong. They can’t all be right!

      • DrO says:

        Dear Clive

        Cheers for all that. I think your response covers material much broader than the direct kriging issues. As I had indicated at the outset of my response, it is in fact correct to couch the matter in that much broader sense, and, essentially, incorrect to couch the matter in the much narrower confines of “global average temperature” (GAT).

        As noted, by itself GAT is not particularly meaningful. Indeed, it is rather misleading to use it in any important sense, and continued use of it in relation to the climate controversy is just plain wrong.

        Your response seems to indicate a position of the sort (if I’ve understood it correctly):

        “while these methods are not very good, we must do something”.

        If indeed that is your position (and it is certainly the position taken by various climate fanatics), then I must put a stop to it immediately.

        Notably, one can provide a reasoned derivation proving that if you cannot make a sufficiently reliable forecast, then you cannot take any decisions. For example, if you come across a man lying in the street, but have no instrumentation of any meaningful sort about his condition (e.g. is he dying, sleeping, shot in the arm, or ???), what is the appropriate action to take? Some will insist that you do something/anything, as “surely doing something is better than doing nothing”.

        Can we agree that is nonsense?

        If you don’t know (reliably) what is going on, then the correct policy is “do nothing”. That is, a kind of generalisation of the Null Hypothesis, to put in “math terms”.

        Any other consideration or discussion of policy, particularly in this politically hijacked subject is down right dangerous. The UN/IPCC are screaming to amputate the man’s legs, on the grounds that we “must do something”, and that they actually know what’s wrong with him, when they cannot possibly know what is wrong with him. Very very dangerous.

        Lending any measure of direct or indirect support to that type of lunacy is also dangerous, and is primarily why I have taken exception to some of your posts (though they are also quite wrong in both mathematics and physics … I have proven this in the past, but I provide some more proof here, and some also in my later response to Nick Stokes’ claims, such as they are).

        Just a few points, and most of these have been proven in the past, and I have sent you proper data and mathematics on this in the past, so not sure why you continue as you do. However, you may recall just a few of the many fatal flaws include:

        1) I had sent you the link to the “GEWEX Radiative Flux Assessment (RFA)” (i.e. http://www.wcrp-climate.org/documents/GEWEX%20RFA-Volume%201-report.pdf) on more than one occasion. Previously, I specifically pointed you to the portion of that doc where the satellite data deals with a “massive mystery” that no one can explain, but for the lack of a better “name” they talk about it in the context of “cloud effects” (though they clearly state they don’t actually know what it is). Moreover, the data shows that this “mystery quantity/cloud effect” has the opposite sign in reality, compared to that used in the GCM’s etc. CRUCIALLY, not only does the real data have the opposite sign compared to the models, but also the net difference is on the order of 20 W/m^2 (i.e. one is on the order of +10 W/m^2, the other is around -10 W/m^2).

        Given that the entire “trouser soiling/sky is falling” UN/IPCC GCM nonsense revolves around a “radiative forcing” of 2 – 3 W/m^2 … the correct response should have been immediate dismissal of the GCM’s, and of the entire UN/IPCC hysteria, as clearly the models and their “projections” are millions of miles from reality (and have the wrong sign).

        2) Your reference to the model parameters being “tuned” to past warming is an extremely dangerous endorsement of an extremely dishonest methodology employed by the UN/IPCC et al.

        a) The “tuning”, in my part of the universe, when applied to models composed of systems of PDE’s, is officially referred to as “parameter estimation”, but is essentially a very fancy version of “curve fitting”. That “tuning” is ONLY performed on a very short and (too) carefully selected portion of the past (the 20th century for AR4, and about 120-ish years for AR5) … though that must be utter nonsense, even on your own evidence, as any “global” data prior to about 1950 is almost surely meaningless (in the IPCC/UN context).

        If they “tuned” to, say, the 18th century, their “projections” would be profoundly different, not only to 2100, but completely wrong when compared to the “known” results for just the 19th century … etc etc.

        This is ALWAYS the LIE with statistical methods/models. You can embed any trend and variance you like by simply messing around with the sample set. The more recent transition away from PDE/conservation based methods to time-series based methods is an “amplification” of exactly that method of lying.

        b) The “projections” they actually make are NOT based on the “tuned” models! So your statement is outright false in this respect. As proven to you in the past, and amongst many other critical problems, NONE of the AR4/AR5 “projections” includes volcanoes, but the “tuning” does … very naughty. Also, the “tuning” to the 20th century includes “tuning” to remnant particles in the atmosphere. HOWEVER, the “projections” are “calibrated” to the so-called “scrubbed sky” … and the list goes on and on. Each and every one of these “tricks” in the “projections” creates “projections” profoundly out of step with the “tuned” models, even if the “tuning” were not the lie that it is.

        … incidentally, this is exactly the same “shell game” Schmidt tried to use in response to the clear proof of the failure of the models. I wonder how many people even noticed that Schmidt’s citation of the “tuning” has nothing to do with the projections (apples v oranges).

        c) In addition to abusing the language of mathematics and physics, the UN/IPCC have been “inventing” an entirely “new/special” language just for themselves. The word “projection” is one of the sneakiest examples of this trickery. Notice that they stopped using the word “prediction” in favour of a very (too) carefully engineered definition of “projection”, solely and purely to allow a solicitors’ trick of pretending that they have “predictions”, when they don’t.

        Having all these solicitors’ tricks as the basis for “their language” is extremely telling and worrying.

        3) As proven previously, IT IS IMPOSSIBLE to predict the climate in any practical manner in the UN/IPCC context. This is not because we don’t have enough math geeks or supercomputers, but rather it is a property of the fabric of the cosmos. The climate is what, at my end of mathematical modelling, is called a “fundamentally unstable phenomenon”. This is the group of phenomena where the tools of non-linear dynamics are required (which encompasses fractals, chaos, etc).

        If you wish to do the analysis, you may wish to begin by calculating at least the easy bits, like the correlation dimension, Lyapunov exponents, etc, but really, eventually one must parameterise the (fractal) state space attractors/repellers etc.

        Since it is much too complicated and time consuming to teach you that in a blog, at least calculate, say, the power spectrum. This is relatively easy, since it is just a variation on a Fourier transform, which you can apply in the same “black box” manner as you did your triangulation.

        As a basic rule, if the power spectrum is continuous or near continuous, you may likely have an aperiodic problem.
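        A minimal version of that check in Python (illustrative only; y here is a random placeholder standing in for a monthly anomaly series):

            import numpy as np
            y = np.random.normal(size=1024)         # placeholder series
            y = y - y.mean()
            power = np.abs(np.fft.rfft(y)) ** 2     # periodogram
            freqs = np.fft.rfftfreq(y.size, d=1.0)  # cycles per sample
            # sharp isolated peaks suggest periodic content; a broad,
            # near-continuous spectrum is consistent with aperiodic
            # (possibly chaotic) dynamics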

        … in that case, it is likely ALL OVER for the models, since even if the models were “perfect” the practical forecast horizon would be much too short, and thus the UN/IPCC et al can “go home”.

        … of course, it is not as simple as mere power spectrum, and in any case, there is not even the tiniest fraction of data anywhere on the planet to perform this type of analysis properly … again, way too technical for this blog.

        I have performed various such analyses, and I can tell you (to the extent that the meagre data permits), it is well and truly a fractal/aperiodic matter, and regardless of the “perfection” of your models, it is a complete “SHOW STOPPER” for climate prediction.

        … there is NO POSSIBILITY of any practical climate forecasts of the sort the UN/IPCC have in mind, and thus no possibility of “climate policy”.

        4) VERY MUCH WORSE, and very much MORE FATAL, is that even if you “krig” your way to a perfect forecast of the GAT (which the universe says you can not), it would still be UTTERLY MEANINGLESS in the UN/IPCC sense, since there is NO possibility to link GAT to human welfare.

        In math terms, the GAT prediction is, say, a “countable infinity difficult problem”, but the “human welfare prediction” is an “uncountable infinity difficult problem”.

        Indeed, the only evidence of any sort that we have is the 20th century. Since the 20th century saw a GAT increase of around +0.7C, and since the planet’s population experienced the single greatest expansion of welfare/standard of living in recorded history … perhaps one would wish to go with that “trend” as a “predictor”, rather than the UN/IPCC’s “only Hell in a handbasket possible future”.

        Though that is at least some actual correlation, clearly that is not conclusive science either.

        5) Amongst all this kriging, GCM’ing etc etc, have you noticed (have you actually read) that ALL of the UN/IPCC strategies/policies rely on a TRANSFER of (a huge amount of) CASH to “solve” the “sky is falling global warming problem”?

        If you honestly believed that the carbon footprint would destroy life as we know it, and the Indians and Chinese are each doubling their carbon footprint every 10-15 years or so, while the developed countries are screwing their populations with 30c/kWh “green” electricity, where otherwise 3 – 4 c/kWh traditional is available … would your “two pronged” solution strategy be about “taxing the rich”?

        … if somebody parks in rush hour for self-serving purposes and causes gridlock and inconvenience for thousands of motorists, then is the correct solution a 300 pound (or dollar, etc) ticket (which they can afford)? No, that is just a scam for the city to generate revenue. If you really wish to stop illegal parking, apply a sentence of, say, 12 months in prison. I am pretty sure no one would EVER park illegally again.

        … put differently, the models etc are rubbish; they have nothing to do with anything, except a lot of hand waving to posture for a massive redistribution of wealth (i.e. a giant step to the left).

        Finally, and as I have pleaded with you so many times in the past: there must be a way you could present your efforts that clearly decries the disconnect with the UN/IPCC/AR etc nonsense. If you wish to krig, then krig, but please do not do it in any way that lends credence to the UN/IPCC nonsense (which is fuelled to a large extent by Had/CRU, NASA/GISS, and NCDC … i.e. all the chief culprits in data manipulation and lying with “models”).

    • Nick Stokes says:

      There is much to disagree with on the more general matters, but I’ll stick to what is relevant to this post:
      “The basis for your interpolation/approximation methodology is badly flawed.”
      No, it’s not. Dealing with the points:
      1) Gödel etc is just tendentious. The approach used is perfectly orthodox.
      2) Clearly the nodes lie on the surface. One could fuss about whether the intervening surface is on the sphere, but as DrO says himself later, the difference would be tiny. These triangles are small relative to radius.
      3) No fatal flaws there. In fact, FEM has a simple way of dealing with these issues. You write a mapping function that takes the flat triangle onto the sphere surface. Its derivative matrix would be very close to I. The integral on the surface would be that on the flat, multiplied by the determinant of that mapping. Virtually identical.
      a) Grid Orientation Bias. There is no bias. As I’ve mentioned, the weight for each node is the area of all the triangles it touches. In the case illustrated, the nodes at the end of the diagonal get the full weight of both triangles, the others get just one. But that isn’t a bias. In any case, if the mesh is Delaunay, as IDL says, and as I would expect, the matter is determined. The diagonal must join the nodes with the greater angle sum. Again, a little trickier if they are not coplanar, but the deviation is small.
      b) I’m not sure what the point argued here is. What Clive is doing is orthodox FEM, where the approx function is C0 – ie continuous with discontinuous derivative. And
      c) that is perfectly consistent with thermodynamics and is used as such all the time. Fluxes are matched on the boundary. C0 elements are fine for second order PDE.

      • DrO says:

        Dear Mr Nick Stokes:

        Before I respond, I am obliged to emphasise to the audience that my citation of Stokes’ Theorem has absolutely no relation to you (Nick Stokes). Rather, I am citing a very great 19th century mathematician: George Stokes. He made, amongst other things, great contributions to vector calculus, and to what at my end of the universe is called differential topology. Very clearly, that is not you.

        Also, I must begin by noting that I had presented a comprehensive and data based position that clearly supported and proved the points. Your response seems to be devoid of any counter data/fact, and is just a lot of “hand waving arguments” and some “big words”.

        For the record, Mr Stokes, I have been performing mathematical modelling for more than 45 years. Most of that has been in commercial/industrial settings, where if you get it wrong, you don’t get paid; if you get it badly wrong, you lose your job; and if you get it horribly wrong, you could end up in prison. This is a profoundly different environment, and a palpably greater motivation to get it right, compared to academia or the vast bulk of so-called “climate science” (where anybody can publish anything).

        … I am quite certain that my 2nd and 3rd year undergraduates could tear your comments to pieces, but let’s try the gentle version first. Indeed, in spite of your penchant for big words and “apparent” theories, I think small words might be more appropriate:

        Your point 1) Re your completely unsupported comments Re Gödel: You are of course flatly wrong. The presentation I provided clearly demonstrates a breach of Gödel on at least two levels.

        a) Clive’s position, so far as I can tell over these various “kriging” posts, is: “when we interpolate a different way, we get a better answer”. But on what basis can you say one answer is better compared to another? You cannot, at least not with the information provided. As such, you have an “axiomatic” system. You simply assumed one is better than the other. Gödel proves that one cannot establish any “truths” inside an axiomatic system from within.

        Even worse, the land based data showed “error bars” of 0.4C 95% variance, whereas the differences between Clive’s various tests were, by contrast, tiny and well inside the error bars. This too rather questions the veracity of one krig over another krig.

        b) But my proof went further. Clive’s kriging etc completely ignores all of the satellite data. So now we have another contender for the truth. Indeed, as the satellite data is very clearly diverging from the “krigged-land” results, one is now in a real bind to prove what benefit, if any, any fitting/interpolation exercise would bring via kriging.

        … indeed, as I indicated earlier, is it possible that the bulk of the “bending/manipulation of the land-data” by the IPCC et al is effected through some (manipulated) kriging?

        In short, even if Global Average Temperature (GAT) were meaningful in any important way, one could not make any conclusive statements regarding which kriging is actually better, at least not without additional (e.g. external) data. The existence of the satellite data very much exacerbates the entire (land) kriging circularity.

        c) Personally, I don’t see how anybody actually reading these posts and assessing the data could miss that, but then perhaps you may wish to review the meaning of “tendentious”, though I think smaller words might be in order.

        Clive’s approach is not “orthodox”, or at least not orthodox for answering whether one interpolation method is better than another in this setting … clearly your hand-waving, unsupported remarks are false, and rejected.

        Finally, I noticed you have completely missed the boat on the most important aspects of these data sets: The land-based are contradicted by the satellite based. If you believe that is “orthodox”, then I think you have bigger problems compared to what you have revealed here.

        Your point 2) No idea what you are trying to say here. What do you mean by “These triangles are small relative to radius”. What radius?

        In any case, the issue that you continue to fail on is, as indicated in the original post, that at any instant the planet’s temperature topography is a highly non-linear and, CRUCIALLY, non-monotonic surface. There are “mounds” and “valleys” all over the place. As such, the basis functions for discretisation are important. Based on the size of the triangulation shown in Clive’s charts, and based on the resolution in the satellite images, it is clear that the “raw triangulation” may fall afoul of the standard convergence/error analysis for discretisation. Put differently, you have NOT provided any proof or fact that allows you to assume that linear basis functions are sufficiently accurate or stable.

        Based on my experience of solving these types of PDE problems with Finite Difference (FD), Finite Element (FE), Boundary Element Methods (BEM), and Boundary Methods (BM), etc, the triangulation shown in Clive’s images is too coarse to assume linear compact supports to be acceptable.

        BTW, why do you suppose they use higher order Hermite etc polynomials in GCM’s?

        In any case, you would need to find a combination of small enough discretisation that allowed the use of Clive’s assumed linear compact support, which you have NOT done, so no idea how it is you think you can make your claims.

        Indeed, it is much worse still. If restricted purely to an interpolation of some (say geological) “surface data” (as I indicated before), some greater degree of sloppiness might be tolerable. However, in complex multi-phase flows, with likely hyperbolic character, variable coefficients, and anisotropic features, there will be highly non-monotonic features of the solution. This means (and if you actually know what FEM is, you should do this for yourself) one must perform a very careful analysis of the reduction of error and alteration of (numerical) stability/convergence with alteration of discretisation.

        … and since these are (highly) non-linear PDE’s, good luck, since the standard error/convergence etc analysis are limited to linear PDE’s.

        … making a sweeping hand waving statement as you have strongly betrays your lack of understanding or experience in these matters … or is your response purely an emotional matter?

        I am absolutely certain that even with linear basis functions for triangulated FE problems, altering the orientation can have a profound effect on the result, particularly for non-linear, anisotropic multiphase flow … and God help you if any of the viscosity or other critical ratios are > 2 or so.

        … I am absolutely certain on the “grid orientation” issue, because I have actually done it, and proved it … and by the way, I did NOT use black-boxes, I wrote tens of thousands of lines of code from scratch with my own little fingers.

        HOWEVER, if Clive wished to avoid the ENTIRE set of complications with “vertex/centroid averaging”, he could instead keep each instrument/thermometer etc as the “centre” (or effective centroid). Then triangulate, or whatever, the raw data. However, now the actual raw data remains the “centre”, while the discretisation is used to determine the “domain boundary” of that weather station. Then each thermometer, or whatever, is used directly, and simply weighted by the area implied by the “bounding calculation”. If you don’t change your instruments too much, then this “domain area” calculation need be performed only once, which also massively reduces your computation times/costs.
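        One way to realise that suggestion in Python (a sketch of the idea, not anyone’s actual code: each station is weighted by the area of its spherical Voronoi cell; assumes SciPy >= 1.5 for calculate_areas):

            import numpy as np
            from scipy.spatial import SphericalVoronoi

            def station_weighted_mean(nodes, values):
                # nodes: (N,3) unit vectors of station positions;
                # the cell areas need be computed only once while
                # the station network is unchanged
                sv = SphericalVoronoi(np.asarray(nodes), radius=1.0)
                areas = sv.calculate_areas()  # one area per station
                return np.sum(areas * values) / np.sum(areas)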

        Your point 3) Although you have not stated so, I presume FEM means Finite Element Method (for me, just FE). If you meant something other than FE, please clarify, but it seems to be the one that pops out at me and makes sense in this context.

        It was not my intention to bring FE into this discussion earlier, but as Nick Stokes has, I am obliged to provide a response, and it is rather more technical compared to the (exact same) point made earlier without dragging FE into it.

        … you are quite wrong, and clearly have little knowledge/experience of FE. The mapping that you cite is not the mapping that I am talking about, and I can’t see how you could misunderstand that if you have actually worked with FE.

        To begin, I clearly and repeatedly used the expressions Green’s Theorem/Stokes’ Theorem in the context of “vertex averaging to arrive at Clive’s centroid” and the connection of the “domain to the boundary”. That speaks to a completely different and more important matter compared to the trivial matter you seem to be caught up in.

        CRUCIAL RESULT NUMBER 1: Essentially, all Integral/Boundary methods used as solution techniques for PDE’s etc rely on variations of Green’s/Stokes’ in the sense that those theorems establish the following, and most important, result: Under certain reasonably general conditions, one can equate what happens in the domain (e.g. a “double integral”) to an integral around the boundary (e.g. a single integral). All of these methods tend to derive from Variational Calculus, but I will skip that bit. That is, there is a proof that we can equate, say, a 2-D problem to a 1-D problem. For example, a heat flow problem over some 2-D “cell” can be equated to a (special) integral around the boundary of that cell.

        This can be performed for any general shape. If performed in a completely general global manner it is called the Boundary Method. If only the outer boundary is discretised, then it’s BEM. If you discretise the interior and apply Green to the discretised interior, then depending on various other choices, you end up with Galerkin, FE, or the so-called Spectral Method used in some GCM’s for “horizontal spatial discretisation” (often still using FD for vertical and time).

        In the context of the present discussion, and as I stated previously, this equates to integrating around the outside of a triangle to determine the triangle’s interior, in the form of average temperature.

        If one arrives at that “centroid” average by averaging the vertex temps, then that is TANTAMOUNT to performing a form of integration around the boundary, while assuming the vertices are connected by linear temp functions.
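        (For concreteness, the standard identity being invoked here: for T linear over a flat triangle of area A with vertex values T_1, T_2, T_3,

        $$\frac{1}{A}\int_A T\,\mathrm{d}A = \frac{T_1 + T_2 + T_3}{3},$$

        i.e. the vertex average is exactly the mean of the linear interpolant over the triangle.)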

        In fact this can be seen directly in Clive’s 4th image on this page (http://clivebest.com/blog/) … he doesn’t number his images, but look for the “triangle”.

        This is fine, so long as you have first proved that a linear basis function around the boundary of the cell is sufficiently accurate etc.

        Nick Stokes DOES NOT make any such proof (nor does Clive). In fact, the practice is to show the error effects of altering the discretisation, and also of varying the order of the basis functions.

        … if I had to guess, it may be that Nick Stokes does not have much experience with PDE’s or FE, and perhaps has only worked on relatively benign problems (e.g. monotonic), and may have used an extremely finely discretised triangulation, which if performed correctly, would be suitable for benign problems (since small enough triangles with linear compact supports can, for some cases, still produce reasonable precision).

        THE SAME CANNOT BE SAID FOR THE GENERAL CASE, and Nick Stokes is wrong.

        Remember, Clive had switched to kriging raw triangulated temps. These form larger and irregular triangles, and the temp surface is certainly NOT benign. Therefore the ASSUMPTION that linear basis functions are acceptable has not been proven, and is in some doubt.

        Here is a very simple example, with two scenarios:

        CAVEAT: a complete discussion with FE would necessarily require consideration of super-positioning “hat functions”, which also produce effects similar to linear, but I will not use those as they don’t change the results here.

        (i) Assume linear basis functions, as before. Now the integral along one edge of the “tile” (i.e. 1/3 of the integration) will be some linear function like y = a1*x + a0. This produces the exact same result as the vertex averaging, as per Clive’s “image 4” on (http://clivebest.com/blog/).

        (ii) Suppose instead assuming flat tiles is too inaccurate, and one must use boundary basis functions of, say, quadratic nature (in practice we have better choices compared to power series polynomials, but I don’t wish to dilute the story). That is, to get the value along one edge, the function now is y = a2*x^2 + a1*x + a0 (say as required by the implication of (b) (Missing) Curvature Bias, in my post above). Since this quadratic must join up with exactly the same vertices as in the linear case, the mid point along this edge must necessarily be different compared to that of the linear case. Therefore, non-linear basis functions will produce different “centroid averages” compared to the linear (and thus compared to the “straight vertex averaging” that equates to linear).

        Try it: what is the half-way value of a straight line between vertex A at 10C and vertex B at 5C … sounds like (10+5)/2 = 7.5C to me. Now what is the half-way value if there is a quadratic hanging between 10 and 5? It must be more than 7.5 if concave-down, where the concavity would be forced, for example, by the (cell’s) boundary conditions and continuity requirements at the vertices and/or due to flux etc factors along the edge.
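        (Worked through: with y(0) = 10, y(1) = 5 and a quadratic edge y(x) = a2*x^2 + a1*x + a0, the endpoint conditions force a0 = 10 and a1 = -5 - a2, so

        $$y(\tfrac{1}{2}) = \frac{a_2}{4} + \frac{a_1}{2} + a_0 = 7.5 - \frac{a_2}{4},$$

        which differs from the linear value 7.5 whenever a2 is non-zero; a concave-down edge, a2 < 0, lies above 7.5.)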

        THEREFORE, accounting for curvature in the temperature topography will necessarily produce different “centroid” averages, if we stick to proper Green’s Theorem etc.

        … Once again, Nick Stoke is wrong.

        Your point 3 b) & c): You have really dropped the ball here, mate.

        Let’s begin with the big picture objective: a global heat balance. This is constrained by the various laws of thermodynamics and conservation principles. Before we get to some of those, notice that:

        (i) The raw temp data is krigged in space, which is a linear regression method of sorts. The result is the “trend” of that regression (i.e. MLE) in space. Clive also states that he “averages over time” to obtain a trend in time. All these averages mean that there are bits left over. Notably, an MLE (e.g. regression) will also yield “residuals” (i.e. error terms), which tend to be “dropped” by most people (and there is no easy way to include them so as to ensure conservation principles anyway).

        Now, before we even look at the transport equations, we can say with HIGH CERTAINTY that if Thermodynamics NECESSITATES a heat balance, and you have created ONLY various AVERAGE temperatures (and thrown away the residuals to boot), one can be quite certain that the “averages” do not represent the exact heat fluxes. So just on this point, we have broken thermodynamics.

        Next, nowhere in thermodynamics does it say that “averages” are preserved.

        Moreover, for all these averages of averages to exist, it must be implicitly assumed that there is thermal equilibrium, which we know is not possible.

        SO JUST at this stage, much of thermodynamics has not just been broken, but tossed out with the bath water, and flushed down the toilet.

        (ii) Keep in mind that it is the heat flux (and the mass and momentum fluxes) that must be determined, since only once those are nailed down can we say anything about temperature; and, as we know, these fluxes occur in a number of forms, such as diffusion and convection. We can then say with high certainty that the model equations are of the form (I’ll just write this in 1-D to avoid confusing anyone with complex symbology; I am not sure which of these formulas will render correctly on Clive’s blog, but I am sure we will get it aesthetically right eventually):

        $$\cdots + \frac{\partial}{\partial x}\!\left(a(v)\,\frac{\partial u}{\partial x}\right) + b(v)\,\frac{\partial u}{\partial x} + q\,u = g\,\frac{\partial u}{\partial t}$$

        where u and v are vector valued, and include T, p, CO2 concentration etc. Of course, this would normally be written in its 3-D variant, along with auxiliary equations, and of course IC's, and BC's.

        The first term represents diffusion, the second term represents convection. Clearly, the first term, being a second order derivative (or more generally a partial), implies the requirement of a solution that is continuous in the 2nd order.

        The simplest solution of a 2nd order PDE is a quadratic in, say, Temp. Therefore it is not possible to have quadratic continuity if you are forcing linear continuity … again, Nick Stokes is wrong.

        Even if a sequence of linear piece-wise continuous elements approximates a quadratic, it is not 2nd order continuous. Try taking the 2nd derivative of a sharp corner where two straight lines intersect … again Nick Stokes is wrong.

        Thus, if the temperature topography of the planet is to be consistent with the Laws of Thermodynamics and conservation principles, then there will requirements of degree of continuity in the solution.

        Moreover, since these are non-linear PDE's, and many with hyperbolic character, anisotropic features, etc, there may be requirements for an even higher order of continuity.

        In these cases, if the temperature topography is represented by a bunch of linear tiles with linear boundary basis functions/compact supports, then there is no chance that the solution to the PDE can be consistent with the krigged etc "linear/averaged" temperatures.

        … again Nick Stokes is wrong.

        Now then, Mr Nick Stokes, if you wish to disagree with any of this, then, unlike your previous approach, please be so kind as to:

        1) Use actual data, equations, and a precise language

        2) Please demonstrate specifically how the data, equations etc you provide are at odds with my submissions.

        BTW, just taking a step back and thinking about the "big picture": if what you say were correct, then all complex multiphase flow dynamical problems could be "solved" by the simple kriging/linear-averaging presented by Clive … I would be very impressed if that were proven true … but really, what are the odds, and can't you actually see that it can't be?

        • Nick Stokes says:

          Well, again, just a few things.
          “Stokes Theorem has absolutely no relation to you (Nick Stokes). “
          Not entirely. Sir George Gabriel’s great grandfather was an ancestor of mine.

          “I have been performing mathematical modelling for more than 45 years. “
          Me too.

          “The land-based are contradicted by the satellite based.”
          Nonsense. They are measuring different places.

          ““These triangles are small relative to radius”. What radius?”
          The radius of the Earth. Clive is triangulating a sphere.

          ” the triangulation shown in Clive’s images is too coarse to assume linear compact supports to be acceptable.”
          In fact I have just posted a study of such integration here, although more numerical tests are here.

          “since these are (highly) non-linear PDE’s”
          They are not PDEs at all. It is an integration of the surface temperature anomaly monthly average.

          “I wrote tens of thousands of lines of code from scratch with my own little fingers.”
          Well done. I write code too. You can find the code for my global temperature analysis here, and in the two preceding posts.

          “you are quite wrong, and clearly not much knowledge/experience with FE.”
          On the contrary, I led a CFD group in CSIRO for many years. I wrote, also with fingers, the FEM code Fastflo, which was NAG’s PDE product for a number of years. It ceased to be distributed when I retired about twelve years ago, but if you google my name and Fastflo, there is still some reference to it on the web.

          “CRUCIAL RESULT NUMBER 1: Essentially, all Integral/Boundary methods used as solution techniques for PDE’s “
          Again, we aren’t solving PDEs here.

          “it is called the Boundary Method”
          One useful attribute of the surface of the sphere is that there are no boundaries.

          “Therefore, non-linear basis functions will produce different “centroid averages” compared to the linear”
          It might. What Clive is doing is equivalent to 2D trapezoidal integration, which is perfectly adequate to this task. With just three data points per triangle, there is no information to extend to higher order.
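
          If it helps, here is a minimal sketch of that integration (Python standing in for the IDL, on a toy planar field of my own devising; linear_mean is illustrative, not Clive’s code):

          import numpy as np
          from scipy.spatial import Delaunay

          # Trapezoidal-style integration over a triangulation: each triangle
          # contributes its area times the mean of its three vertex values.
          # (Planar areas; for triangles small relative to the radius, the
          # spherical areas would differ very little.)
          def linear_mean(points, values):
              tri = Delaunay(points)             # (ntri, 3) vertex indices
              p = points[tri.simplices]          # (ntri, 3, 2) coordinates
              e1 = p[:, 1] - p[:, 0]             # two edge vectors per triangle
              e2 = p[:, 2] - p[:, 0]
              area = 0.5 * np.abs(e1[:, 0] * e2[:, 1] - e1[:, 1] * e2[:, 0])
              centroid_val = values[tri.simplices].mean(axis=1)
              return (area * centroid_val).sum() / area.sum()

          rng = np.random.default_rng(0)
          pts = rng.random((500, 2))
          vals = 3.0 + 0.5 * pts[:, 0]           # a linear field is integrated exactly
          print(linear_mean(pts, vals))          # close to 3.25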

          “The simplest solution of a 2nd order PDE is a quadratic in, say, Temp.”
          ??? The simplest solution is usually a constant.

          “Then all complex multiphase flow dynamical problems”
          We are not solving complex multiphase problems. We are integrating monthly average temperature anomalies over a surface.

  4. DrO says:

    ==> In response to: Nick Stokes says: (March 23, 2017 at 11:23 am)

    ASIDE… Clive : I have taken a slight liberty in posting this as a “new post” rather than as a “reply”, since the replies are nested indentations, and the longer material makes for a “very tall/very narrow column of text”, which I fear might be too hard on some eyes. If there is any objection, let me know (e.g. at my usual email) and I will change it.

    Before I get into the thick of it, as already indicated above, there are serious doubts as to the value/meaning of ANY variation of Global Average Temperature (GAT). As the matter has unfolded, some people seem extremely emotionally wedded to GAT and to certain techniques. As that subject is more properly a separate matter/thread, I will provide some notes later, and perhaps Clive might consider starting a post/thread on that topic.

    … just for starters:

    1) Is there any value of any sort whatsoever in GAT?

    2) Can GAT be used for any sensible or practical purpose whatsoever?

    Or

    3) Is GAT just meaningless mumbo-jumbo (even if there existed a “perfect” interpolation/averaging scheme) packaged in unnecessarily complex (notice, not sophisticated) machinery that produces “pretty pictures”? If so, is it past time to bin all this GAT nonsense?

    … I’ll provide a starting point for this later.

    Now, for the “thick of it”, and Mr Nick Stokes’ Reply: My apologies to all.
    ====================================================

    As I have found Mr Nick Stokes’ submission to be demonstrably dishonest, and indeed what appears to be a veiled ad hominem attack, I am obliged to provide an appropriate response. I would have preferred to stick to science, but I have been given no choice.

    Not to worry, this is the one and only/last time I shall do so with respect to Mr Nick Stokes. After this, I shall “switch to ignore”, unless there arises some formal/official reason to do otherwise.

    It is possible Mr Nick Stokes is one of those “must have the last word” types … be my guest.

    The proper and honest treatment of the errors and dishonesty requires a lengthy, detailed, fact-based, and reasoned derivation. All of which may be too long for many readers (I can understand many just won’t care, I get it, but I am obliged to defend my honour in the face of scoundrels). This is in dramatic contrast to Mr Nick Stokes’ terse style of “off-handed dismissal/dishonesty” of anyone who disagrees with him. So, I first provide a summary of selected issues, which are of two types: dishonesty, and astonishing mathematical incompetence.

    It may be of some value to note that, having followed the links provided by Mr Nick Stokes, there exists a substantial collection of commentary by those who are like-minded with Mr Nick Stokes, and who promote and conduct their activities not only with the well-established methods of dishonesty and anti-science more commonly seen with the IPCC et al, and by Mr Nick Stokes, but who also exhibit a sort of “tribal narcissism” and “politics of outrage” that is entirely outside of science and reason, and can be summarised as purely vexatious, and in some cases as pure cowardice.

    … once again, as if any more was required, we have thoroughly compelling evidence of how science can be hijacked by politicos, and as some of the examples demonstrate … by just plain vexatious individuals (e.g. whoever “@whut” is).

    … even if we find charity for all of the dishonesty and incompetence, I once again emphasise the well-established observation:

    “For every complicated problem, there is always a simple solution, but usually it’s wrong”

    Dear Mr Nick Stokes:

    Wow, Mr Nick Stokes, I expect you need to deal with somebody who has a different sort of doctorate compared to mine.

    Although there were signs of dishonesty in your previous submissions, earlier I elected to take the high road and give you the benefit of the doubt that you were simply wrong, rather than outright dishonest.

    My value system/duty, and professional courtesy oblige me to explain myself and give the reasons for my strong statements. After that, it is good bye.

    Astonishing Mathematical Incompetence Summary: I think some of these may be driven by various aspects of dishonesty (see below), but I will treat them as “mathematical issues”, some of which can be summarised as (with details further below):

    1) Mr Nick Stokes claims that the “usual solution” to a simple 2nd order DE problem is a “constant” … wow! That is just too weird for me. I think 17-year-olds already know better; certainly any engineering undergrad would tear that to shreds (see below).

    2) Mr Nick Stokes does not understand that a spherical shell/surface has two boundaries (particularly the “shell” we call the atmosphere), but of course his remarks are almost surely the result of dishonesty.

    3) Mr Nick Stokes does not seem to understand that tiles/triangles have boundaries, and the core of the discussion above is about those boundaries, not the boundaries of the surface of a sphere (though a much more detailed presentation on a different subject could show one can deal with that as well, even if Mr Nick Stokes is ignorant of it). Is he so incompetent as to not know the difference between the boundaries of a tile vs. the boundaries of a shell, or is this another misdirection/dishonesty, and posturing for a thinly veiled ad hominem attack?

    4) Mr Nick Stokes presents himself as some sort of expert on FEM, but he has failed to grasp a crucial aspect of FEM, being the integration along the boundary of a tile/cell to evaluate/solve properties of the domain encompassed by that boundary … i.e. Green’s Theorem. For Mr Nick Stokes to fabricate that there are no “boundaries” here is just too weird (wrong, and dishonest).

    5) Mr Nick Stokes does not understand that FEM is just one flavour of a class of boundary methods. Nor does he understand that one can, and often prefers, to “remove boundaries entirely”, even when they exist (e.g. put them at infinity).

    He also has some very odd ideas about surface fitting and averaging, as presented on his blogs. It is not clear why he can’t figure out a proper fitting/averaging of the planet’s temperature topography. Perhaps it is just too hard for him, or ??? Proper techniques exist, so I am not sure why he insists on ramming a massively over smoothed version of the data down our throats, and which is clearly contradicted by any and all instrumented surface temperature topographies.

    It is noteworthy that I pointed out to Mr Nick Stokes that he needs to provide convergence, error, and stability analyses and use proper data, amongst other things. Following that, he provided links to his work: many there, from years ago already, had pointed out to him that his methods require “proof” (and use other words to make the “Gödel” point above), require proper use of data, and require proper error analysis. Though on those blogs, Mr Nick Stokes and colleagues seem astonishingly resistant to all these core and standard requirements for proper mathematics/science, exactly as he is resisting here.

    … alas, this is the world of climate fanaticism and dishonesty, and I suppose nothing should surprise me after the AR4 and AR5, but it does sadden me to see so much unscrupulous behaviour.

    Dishonesty Summary: The dishonesty in Mr Nick Stokes’ responses takes several classic forms of dishonesty easily observable in, especially, climate science/fanaticism. Some of these can be summarised as (with details below).

    1) The fabrication dishonesty, including its cousins:

    a) The “put (fabricated) words” in somebody’s mouth dishonesty

    b) The “Jabberwocky” dishonesty: fabricating some nonsense that sounds like it might be something, but in reality is nonsense

    2) The carefully “cherry picked”, “out of context” dishonesty, pretending that “black is white” or “up is down” in an attempt to change the true meaning.

    3) Something I can only refer to as “cuckoo-bananas” dishonesty; see “ancestry”, “Nag”, etc. in Section F) below.

    Some Detailed Points and Proofs
    ========================

    I will change the order of the errors, lunacy, and dishonesty in Mr Nick Stokes’ response, as some of it is rather easier to deal with, such as the preposterous errors in the simplest/most basic mathematics and concepts:

    Examples of Nick Stokes’ math lunacy

    A) The Simplest Solution of a 2nd Order PDE lunacy
    —————————————————————–

    Mr Nick Stokes writes, with the double-quoted bit extracted from my submission:

    ““The simplest solution of a 2nd order PDE is a quadratic in, say, Temp.”
    ??? The simplest solution is usually a constant.”

    I can’t believe he could say something this WRONG. If I said something this stupid, I would be apologising profusely and hanging my head in shame. To wit: I had actually written a second order DE in a slightly more general form, with the second order term:

    … + d( a(v) du ) / dx^2

    let’s make this the “almost simplest” form, and in a form almost anyone should recognise, as per:

    d^2u/dx^2 = 1

    Integrate twice to get:

    u = x^2/2 + C x + D

    where C and D are constants.

    Clearly, the SIMPLEST SOLUTION is a QUADRATIC.

    So Mr Nick Stokes seems incapable of even A-level (for Brits)/high-school (for Yanks/Canucks) basic calculus.

    I suppose we could mess about with the two pathological and the trivial solutions, but that would be wrong in this context, and certainly not “usual”.

    In any case, “usually” (i.e. ALWAYS for non-pathological), the SOLUTION is a QUADRATIC.
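
    Anyone who doubts this can check it mechanically (a throwaway sympy snippet of my own, nothing more):

    import sympy as sp

    x = sp.symbols('x')
    u = sp.Function('u')

    # General solution of u'' = 1, obtained symbolically: a quadratic.
    print(sp.dsolve(sp.Eq(u(x).diff(x, 2), 1), u(x)))
    # -> Eq(u(x), C1 + C2*x + x**2/2)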

    A more general solution would typically involve sums of transcendental functions (e.g. e^(-a x)·(sin(Ax) + cos(Bx)) etc.), but those, generally, will have at least third order curvature (i.e. cubic) over various intervals, as can be easily seen from the presence of inflection points. Thus, any surface fitting and Green’s Theorem basis functions would then require even higher order for “proper” GAT (if such a thing actually existed).

    If we move to PDE’s (aka infinite dimensional problems), “a usual” solution would be a kind of extension of the above, such as via Fourier Series. I make this point here, since there is an interesting connection to Mr Nick Stokes’ “proud claims” about Spherical Harmonics and the GAT time-series, and which demonstrates the lunacy of the entirety of climate modelling, GAT, etc. That is, these Fourier Series matters provide yet another mechanism that demonstrates the entire problem to be aperiodic, and that necessarily, any reliance on Fourier or related methods (e.g. Spherical Harmonics) is GUARANTEED to FAIL. That story is a little lengthy and technical, some of which is provided below, and I may produce a more detailed pedestrian version in the future.

    I provide some explanation of this point in B) below. If anybody is in a hurry for that discussion in greater detail, let me know.

    However, one doesn’t even need very basic calculus to see the problem: how can one ever imagine that the “usual solution” to general 2nd order (P)DE’s is a “constant”? If that were true, none of us would have a job and virtually all models, and problems in physics, engineering, etc would have been solved ages ago … Mr Nick Stokes is just too weird for me.

    B) The Sphere does not have a boundary lunacy
    ———————————————————-

    Mr Nick Stokes writes, with the cherry picked quote from my submission that:

    ““it is called the Boundary Method”
    One useful attribute of the surface of the sphere is that there are no boundaries.”

    Sheer lunacy.

    Of course the surface of a sphere has boundaries; in fact, it has TWO BOUNDARIES, especially the bit we call the atmosphere. Ask your 7-year-old next time you are blowing soap bubbles. I am more inclined to believe that Mr Nick Stokes does know this, but that he is engaging in dishonest behaviour in an attempt to prove “black is white”, etc., and fabricate some nonsense/misdirection on which he can pretend my work is flawed.

    However it is much worse still. While the “sphere” comment is quite an obvious bit of lunacy, the entire comment was cherry picked from a longer statement on solution techniques for PDE’s. Of course, and apparently unbeknownst to Mr Nick Stokes, FE is a special subset of boundary methods.

    HOWEVER, in reality that (cherry picked) bit was preceded by a detailed discussion of purely surface matters. His cherry picking is completely out of context. Also, in my original post, before words were put in my mouth, I explained purely in the context of surface fitting/averaging that one can use Green’s Theorem to integrate along a boundary to obtain results about the domain encompassed. At that first point, my submission is clearly just Green’s Theorem (of the discretised/triangulated sphere), discussing how LOCAL averaging of vertex values to obtain a centroid average is just a special case of Green’s Theorem. That is, as I had explained, a full (line) integration around the boundary of each triangle is the generalisation of how to obtain the “vertex/centroid average”, but with the possibility of more sophistication, and especially to allow for difficult non-linearities in the topography of the actual/real temperature distributions, as can easily be seen from any and all satellite images at any instant.
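
    To state the special case explicitly (a standard identity, my notation): for a field T that is linear over a flat triangle with vertex values T1, T2, T3 and area A,

    (1/A) ∫∫ T dA = (T1 + T2 + T3)/3

    and, by Green’s Theorem, the same area integral can equally be evaluated as a line integral around the triangle’s boundary. Replace the linear basis with a higher-order one and the right-hand side changes, which is exactly the point at issue.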

    Therefore, as must be all too clear, the “BOUNDARY” in question here is the boundary of each of the triangles/tiles, not the sphere. Again, can he really be that incompetent, or is this more trickery for his self-serving purposes?

    Bizarrely, Mr Nick Stokes seems particularly proud of the GAT’s on his web pages, which he refers to as FEM, as in Finite Element Method. This is too weird. FEM on a surface is exactly built on Green’s Theorem. Thus, he has condemned my work, even though for this portion of the issues, what he is so proud of is exactly what I am talking about, but he doesn’t even understand that FEM is built on Green’s Theorem.

    Mr Nick Stokes is talking lunacy: there are plenty of BOUNDARIES here, and the ones I am referring to are identical to the ones in his “masterpiece”, such as it is.

    … put differently, to condemn my work in this way is to condemn his own “proud” work
    … too weird.

    Even worse still, although Mr Nick Stokes does not appear to know anything about it, he still felt that he should talk about it. In my submissions, I touched on several methodologies for solving PDE’s that rely on boundary-to-domain relationships. I cited FE, BE, and BM, the last being the Boundary Method. Mr Nick Stokes has quoted exactly that and then makes his ludicrous remark about spheres (i.e. cherry picked out of the context of the discretised sphere). In fact, even if we take the matter completely out of context, the Boundary Method does not actually need to be at the “boundary”. It is, in simple terms, a super-positioning of singular Green’s Functions, often used to relate the domain to the boundary, but the poles of the singularities need not be, and usually are not, on the boundary. What a shame he does not know what he is talking about.

    Even worse still, I don’t know of a single 3rd year undergraduate engineering student, and certainly not a single person who performs mathematical modelling of transport phenomena for a living, or even sporadically (e.g. heat, mass, momentum transfer), who does not rely with some frequency on “infinity domains”. That is, to simplify many problems, one assumes a form of symmetry and allows one or more of the boundaries to be at infinity.

    Indeed, all those I know studying or working in climate/met science seem to be aware of the famous work by Lorenz. The entire derivation of the celebrated Lorenz Equations hinges critically on assuming boundaries at infinity to solve a simplified version of the Rayleigh–Bénard problem. So it is hard to see how even climate modellers would not know about these types of tools/methods, even allowing for the trickery of cherry picking/out-of-context quotation.

    Finally, to come full circle (or “full sphere” 🙂 ), though the details are rather technical, mapping on a sphere has singularities (e.g. infinities at the poles). While I had not gone into detail with my earlier remarks due to the technical digression required, even mapping between a “flat triangle” and a “spherical triangle” generally cannot preserve all aspects, and therefore “equality is lost” (i.e. there will be “distortions” in one or more metrics). This is a well established result amongst the many mathematical treatments of such mappings.

    “Full mapping” of a sphere onto a flat (Cartesian) plane must, in highly oversimplified terms, manage/account for singularities, which are also sometimes accommodated via boundaries at infinity. Thus, the “boundary at infinity” method equates to a sphere in some cases. Moreover, Boundary Methods rely on singular Green’s Functions, and so directly incorporate singularities, which is one of the reasons they are so useful.

    It is also rather bizarre that, as Mr Nick Stokes is so proud of his (quasi) Spherical Harmonics (SH) (see also below on his “regression through SH’s”), he has failed to grasp the entire singularity matter. SH’s are used in climate and other studies for more than one reason, and to achieve different objectives. On the simpler matter of mapping on a sphere, SH’s can be used to make the “management of the singularities” at the poles somewhat easier. Of course, SH’s being truncated Fourier Series (FS’s) can’t actually represent singularities in a general sense, since the only general manner in which a FS can represent a singularity is by retaining the entirety of the infinite series (that is, a proper FS). Put differently, truncating an FS almost surely destroys its ability to represent a singularity.

    Of course, whether one knows it or not, there are different types of singularities: some are “more difficult” compared to others.

    The problem with SH’s in broader climate modelling is much more severe, and indeed fatal. In particular, many GCM’s implement an SH variation of FE to solve the climate model PDE’s “in the horizontal” (they still use FD in the vertical, and in time). The reason they use FE/SH in the horizontal is MOSTLY to help reduce computational time, although at the expense of losing much important, and indeed “show stopper”, information.

    In particular, since much of the “climate problem” is canonically hyperbolic, there is a need to deal with step functions and the like. These represent more difficult singularities. Notably, they CANNOT be represented by truncated Fourier Series. Just Google the “Gibbs effect”, and see below.
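
    Since I said “see below”, here is the Gibbs effect in a few lines (a toy script of mine, nothing to do with any GCM): the partial Fourier sums of a unit square wave overshoot near the jump by roughly 9% of the jump, no matter how many terms are retained.

    import numpy as np

    # Partial Fourier sum of the square wave sgn(sin x):
    #   f_N(x) = (4/pi) * sum over odd k <= N of sin(k x) / k
    def partial_sum(x, N):
        k = np.arange(1, N + 1, 2)
        return (4 / np.pi) * (np.sin(np.outer(x, k)) / k).sum(axis=1)

    x = np.linspace(1e-4, np.pi - 1e-4, 20000)
    for N in (11, 101, 1001):
        print(N, partial_sum(x, N).max())   # peak ~1.179: truncation never removes it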

    In order to accommodate the climate models more correctly, one would wish to move away from FE/SH. FE/SH, while reasonable for (mostly linear) elliptic PDE’s, are generally poorly suited for highly non-linear hyperbolic and/or variable-coefficient PDE’s etc (as much of the climate is). To handle other canonical forms, and especially highly non-linear, and with variable coefficients, one must very much abandon FE/SH, as it is well established that boundary methods in general do poorly in those cases. Instead, one must move to FD etc, which of course, requires more CPU time compared to that easily available on their mountains of super-computers.

    In a sense, you get what you pay for. You want a cheap solution, use FE/SH, but then be prepared to throw the baby out with the bath water.

    To be fair, purely elliptical problems are more efficiently managed on complex geometries with FE compared with, say, FD. That is why civil engineers etc, use mostly FE to solve “beam loading” etc problems (strictly speaking those are “bi-harmonic” PDE’s, but that’s for another day).

    In fact, in the case of the climate, we can prove that using virtually any numerical method that relies on any form of discretisation (e.g. triangulation, or whatever) of the planet (and time), whether FE, FD, or whatever, must necessarily be fatal. One would have thought that Mr Nick Stokes’ bold statements about SH etc would necessarily have apprised him of this.

    To see this, it is possible to show that, for a wide range of situations, Fourier Series (FS) are solutions to PDE’s. FS’s are infinite series. For “fundamentally stable phenomena”, there may not be any high frequency FS components, or they may make a vanishingly decreasing contribution to the solution with increasing frequency.

    HOWEVER, for fundamentally UNSTABLE phenomena, of which we can prove the climate to be an example, the higher frequency terms in the FS are not only important, in many cases the importance of those terms INCREASES with frequency. THEREFORE, any TRUNCATION of a FOURIER SERIES solution necessarily destroys the most critical/important portions of the solution … you’re not just throwing the baby out with the bath water, you are throwing the entire “basin” out … baby, water and all.

    I comment further on these points in E) and F) below.

    No matter what angle I look at Mr Nick Stokes’ remark from, from seven-year-old to advanced modeller, I can find no way to show anything other than lunacy in his statement. Moreover, the sneaky out-of-context cherry picking is, for me, disturbing.

    … As soon as Mr Nick Stokes cited Boundary Methods, the matter is in direct connection to my PDE discussions. However, I expect Mr Nick Stokes will respond to these well-established results and proofs in spherical mapping and (climate) PDE’s, which crush his position, with “we are not solving PDE’s” … that is complete BS, and I speak further on this point below.

    C) The “land-based vs. satellite-based”
    ———————————————-

    ““The land-based are contradicted by the satellite based.”
    Nonsense. They are measuring different places. ”

    This is sheer dishonesty.

    First, notice the careful cherry picking. In my submissions, and Clive’s post, the central issue is the calculation and validity of the Global Average Temperature (GAT). I had very clearly proven that land-based GAT is CONTRADICTED by the satellite based GAT. I had proven this with direct UAH data compared to Clive’s direct GAT results. I had also proven this separately with the results from Dr Humlum, with direct links/citations to alternate/independent data sets/charts etc.

    It is UNMISTAKABLY CLEAR that the land-based GAT’s have been diverging in a disturbing manner from the satellite-based GAT since about 2005. All of which is explained by the recent massive increase in the so-called “Administrative Adjustments” (AA’s) (to allow the IPCC et al to use bent data to pretend their models work). Indeed, one must reasonably ask if the bulk of the AA manipulation is due to “kriging/interpolation/averaging trickery”, as I did in my original submission.

    So there is no question: The CONTRADICTION is emphatic, and with much data mangling by the land-based crowd (which seems to include Mr Nick Stokes, based on the comments on his web pages).

    However, much worse for Mr Nick Stokes is his comment that

    “Nonsense. They are measuring different places. ”

    I apologise, but I can only say: that is incredibly stupid. How can GAT’s be taken at different places? There is ONLY ONE Global Average Temperature; it’s not taken at any place. You either have a GAT or not. Mr Nick Stokes’ comment is preposterous.

    Now, let’s suppose we allow him the charity that he made an honest mistake, rather than the dishonest cherry picking/fabrication, by imagining that he is talking about the actual points on the planet where instrumentation takes readings. In this case, he has gone completely “off the reservation” since, as he does not strike me as the “stupidest person in the universe”, I cannot fathom how he could be so wrong about what Clive and I have written specifically about GAT.

    Moreover, there is much discussion/explanation in my submission explaining the AA’s. Thus, even if we were ONLY talking about distributed readings “in different places”, it is still utter nonsense to say “different places”. It makes no difference whatsoever what the locations are if you are massively manipulating the data with AA’s in the land dataset, and not in the satellite dataset … THAT IS THE ENTIRE POINT of QUESTIONING KRIGING as a TOOL FOR AA’s.

    … and so, again, we are obliged to consider the explanation of his actions as incompetent or dishonest, or ??too weird??

    D) The “it’s not a PDE” dishonesty, and the ”We are integrating monthly average temperature anomalies over a surface” dishonesty.
    ——————————————————————————————————-

    I’ll not reproduce all the statements, but at several points Mr Nick Stokes makes comments of the sort:

    ” Again, we aren’t solving PDEs here.”

    Similarly, he complains that my references to multiphase flow, etc, are effectively inappropriate.

    a) To begin with, this is categorically wrong/dishonest. The early portions of my remarks focus solely on the issue of higher precision interpolation/approximation expressly in the face of the more proper approaches, and do not invoke DE’s etc. Indeed, I make the comparison to some geologist taking surface points to describe topography. I do also extend the matter to point out that, IN CASE one is going to consider this matter in the context of climate science, then Clive’s (HadCRUT’s etc) approach is clearly and necessarily in breach of thermodynamics.

    Mr Nick Stokes got that wrong as well, as some people seem to think they can just average/linear regress “everything” and still preserve conservation principles … that is a “bush league” mistake.

    For Mr Nick Stokes to omit this is at least incompetence, or as is the pattern, more cherry picking dishonesty to try to pretend that “black is white”. Indeed, one might ask why Mr Nick Stokes is promoting “irresponsible science”.

    b) It is clear from my emphatic and emphatically repeated pleadings with Clive that, as responsible scientists, we should be very careful of the dangerous implications of publishing what looks like math, but, unfortunately, tracks in the “climate nonsense detritus”. This point comes up over and over again. The moment one is explicitly or implicitly “sucked in” to the climate modelling issue, one cannot extricate themselves from PDE’s.

    One of the most bizarre, but all too common, “lunacies”, and one which arises with astonishing frequency and insistence on Mr Nick Stokes’ blog, can be summarised as this: they say that the temperature topography should be created by (climate) models, rather than from the instrumented data. Then, they say, their interpolation/averaging methods to calculate GAT should be based on those model-produced temperature topographies. Of course, since the climate models are far too simplistic and arbitrary, one can use them to create any old rubbish, and indeed, any old “high resolution rubbish” with a “smooth rubbish topography”. In that case, all these Spherical Harmonic/linear regression/vertex averaging methodologies can be shown to produce “good agreement” with (rubbish model) temperatures.

    … One might be tempted to scream – OMG! As that entire mind set and “way of life” is not just anti-science, and indeed anti-truth, it is utterly delusional. Alas, that is climate fanaticism, and Mr Nick Stokes and his crowd.

    c) Mr Nick Stokes also states:

    “We are integrating monthly average temperature anomalies over a surface”

    CRUCIALLY, I had also presented an alternative consideration, one that is completely devoid of PDE’s, but which Mr Nick Stokes has mysteriously omitted. In particular, if you wish to omit PDE’s, then what is the POINT of all this kriging and averaging in this climate science context?

    NOTABLY, the HadCRUT data includes their GAT and also their 95% variance/error, which is around ±0.4C. Clive’s kriging experiments seem to alter these GAT values by only a tiny fraction of that error/variance. Indeed, Mr Nick Stokes’ “highfalutin” (quasi) spherical harmonics etc vary from one another by at most around ±0.1C.

    Put differently, the HadCRUT error bars are massive compared to the “benefits” of Clive’s (or Mr Nick Stokes’) insistence that there is some value in dealing with kriging/GAT in isolation (e.g. detached from PDE’s). The HadCRUT data variance CRUSHES that notion directly, and with that crushes Mr Nick Stokes’ cherry picked contrivance.

    To be clear, if your method only makes small changes in the values, and produces results well inside the error bars, then you cannot say anything about “which is better”. Even worse, the entire HadCRUT/IPCC data-sets are highly suspect (e.g. AA’s), and NOT proven to be correct. So, regardless of your method, you CANNOT say you produced any benefit, since you are comparing to “thin air”. Even worse, there are other data-sets (e.g. satellite) that contradict the land-based data you have ASSUMED (but not proven) to be correct, and which come from completely independent sources. Therefore, any “improvement”, as you imagine, over the “thin air” data, in the face of independent data that contradicts/decries your “thin air” data-set, is “exponentially dubious”.

    E) “In fact I have just posted a study” disinformation.
    —————————————————————-

    Mr Nick Stokes gives the impression that he has produced results that speak to important provable problems in kriging and some of Clive’s experiments.

    This appears to be his response to my submission that one must produce various convergence, error, and stability analyses on the methodologies, before one can say anything about (numerical) methods/solutions.

    Mr Nick Stokes states, with the doubly quoted bit from my submission:

    ” ” the triangulation shown in Clive’s images is too coarse to assume linear compact supports to be acceptable.”
    In fact I have just posted a study of such integration here, although more numerical tests are here.”

    Given the degree of incompetence and dishonesty repeated by Mr Nick Stokes, I am not prepared to invest too much time in his publications and claims.

    However, I have taken at least a cursory look at his links, and the commentary by others therein. While it would be un-scientific of me to make flat-out conclusive remarks, I believe the following can be put forward for serious consideration:

    1) Mr Nick Stokes does not provide, in any way that I can tell, any test that proves the linear interpolation is “acceptable” in any important sense, particularly compared to other methods.

    2) I don’t see much evidence of error, convergence, or stability analyses, which are the critical starting point for such matters, and what he claims he has done.

    3) I do see some rather odd things. It is possible that deeper analysis might show these oddities to be reasonable, but for the moment:

    a) I am not sure if Mr Nick Stokes (as with Clive) actually understands the difference between “interpolation” and “approximation”. Of course “climate guys” seem to make up whatever language they like, and abuse any language they like, but still, a couple of points:

    (i) Interpolation, in proper numerical methods, requires by definition that the fitted curve goes through each and every data point. Cubic splines are a well known example.

    By contrast, in proper numerical methods, approximation methods intend to fit the “trend” etc, and need not fit the curve to each and every data point exactly. Linear regression is the quintessential example of this family of methods.

    Thus, for example, since kriging relies on regression, it is an approximation method. This is also why I was surprised to see Clive refer to kriging as a spline, which is very clearly NOT possible via regression (see the short sketch after this list).

    (ii) In much the same way, Mr Nick Stokes seems particularly proud of his use of Spherical Harmonics (SH). However, he then does something quite odd. He goes to all the trouble of applying SH, but then he whacks linear regression through it … HUH?

    I don’t know, perhaps on deeper analysis one might find some sense in all that. However, I have to wonder if Mr Nick Stokes’ SH/regression machinations are not just his version of the bizarreness of the sort that led to the invention of kriging.

    So far as I can tell from the literature, the met guys were trying to fit temps regionally. They tried splines, as that sort of thing is often an obvious first choice. However, the literature is rather vague on the specifics, but basically suggests that splines produced aesthetically unacceptable results (I could not actually find any proper error etc analysis, only that they did not like the “look of it” or “it didn’t look right”).

    Without knowing exactly what they did or found, I would guess from experience that they generated spurious oscillations via splines, which would certainly produce something with an undesirable “look of it”.

    In any case, as they did not “like the look” of the spline results, they decided to just whack a linear regression through it. Of course, to do that, it required some “starting guess”. The nature of the starting guess varies depending on the “flavour”, but this they call “kriging”.

    In short, kriging is, often, a circular methodology, since one has to guess the thing you are trying to determine, and do so via linear regression, which must then necessarily build in the bias/circularity.

    … once again, I stress that I have not been able to find the detailed derivations, work, and analysis on kriging and its “birth”, but the above is a rough version of my understanding based on the papers that I have.

    This allows us to come back to Mr Nick Stokes. Does he linearly regress his SH for the same reason kriging was invented? Too weird.
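
    To nail down the (i)/(ii) distinction with something executable (a toy example of mine, not anyone’s GAT machinery): an interpolant reproduces every data point exactly; a regression/approximation generally does not.

    import numpy as np
    from scipy.interpolate import CubicSpline

    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([0.1, 0.9, 0.2, 1.1, 0.4])

    spline = CubicSpline(x, y)                    # interpolation: exact at the data
    print(np.allclose(spline(x), y))              # True

    coeffs = np.polyfit(x, y, 2)                  # approximation: least squares
    print(np.allclose(np.polyval(coeffs, x), y))  # False: residuals remain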

    Indeed, on those blogs where Mr Nick Stokes claims he has performed the numerical analyses I call for, but clearly has not done so, there are some useful additional indicators of the people on those blogs and of what might be going on:

    a) There is astonishing resistance to fitting curves/calculating GAT against real data. Instead, many are quite insistent on GAT of model produced values.

    b) There is astonishing resistance to testing goodness of fit via the usual well proven techniques. Some of the commentary there suggests they may actually do some of that, but as yet (years later), there is no evidence that I could see.

    c) Mr Nick Stokes appears to be rather resistant to satellite data, even when others have presented the importance repeatedly over the years.

    … haven’t any of those people heard of any of the adaptive-corrective methods? Surface fitting can be tricky, but it’s not “rocket surgery”.

    … anyway, having suffered through all the AR’s and, more amusingly, all the AR “reviewers’ notes” (very entertaining, but so sad), not much surprises me about the “climate fanatics”.

    In short, once again, in spite of Mr Nick Stokes’ confident claim that he has performed important numerical analyses, it is just NOT so, and some parts of it are just too weird, mostly or completely devoid of proper classic mathematics, and in some parts devoid of sanity.

    F) The “Personal Issues” Problem
    —————————————–

    I consider it rather a bad mistake on Mr Nick Stokes’ part to introduce an array of personal information into what is (or should be) otherwise an honest technical discussion. This is the sort of thing that can come back and bite you in the butt big time.

    While I am always supportive of people’s pride in their craft, I offer a few examples in connection with Mr Nick Stokes’ remarks:

    1) Nag etc

    There are a number of self-defeating problems with Mr Nick Stokes’ citation of his time with Nag et al:

    a) Even before I started my graduate work, we would from time to time (try to) use various numerical libraries like Nag, IMSL etc. Very often our code would fail, and we had terrible difficulty tracking down the bugs. In the event, the bugs were in Nag etc.

    Of course we would send the bug reports, and also the correct code etc, to Nag etc. So the Nag code, at least the parts that work, may work not because of Nag, but because we had suffered broken results and fixed their code for them.

    Should anyone be taking credit for someone else’s work?

    So Mr Stokes’ claims should be taken with at least a grain of salt when he talks about his time at Nag as if that should be impressive.

    b) During our investigations, we discovered that much of the code in those libraries was actually an identical or near-identical copy of public domain routines/libraries, such as LINPACK/LAPACK, NSWC, etc.

    Should anyone be taking credit for someone else’s work?

    c) The elliptic PDE solver Mr Nick Stokes speaks so proudly of: well, it’s not nothing, but there are any number of free elliptic PDE solvers out there. Elliptic PDE’s, particularly linear/near-linear and mostly constant-coefficient (elliptic) PDE’s, are in many ways much easier problems compared to non-linear, variable-coefficient, anisotropic, multi-phase flow with a range of elliptic, parabolic, and, most scary, hyperbolic canonical forms.

    The Nag online material implies it can handle “turbulent flow”, but that strikes me as somewhat misleading. Proper treatment of turbulent flow almost surely requires solving hyperbolic problems, which can’t be done with an elliptic solver, at least not directly. I am assuming that they have incorporated some statistical/regression or other means to average out the wave/hyperbolicity, or created a proxy via dimensionless parameterisation, or ?? etc.

    I could find no indication of its ability to handle heat transfer, mass transfer etc, and certainly no evidence of any multiplicity or combination of such issues.

    Indeed, Nag and Mr Nick Stokes speak of that library in the context of Computational Fluid Dynamics (CFD), so it seems it is restricted to just fluid flow problems.

    In CFD, a fluid can be a liquid or a gas. Gasses are compressible, and also for other reasons, pose some more complexities. I could not see if their software could handle gasses etc.

    Even with just liquids, if there is more than one liquid, and suppose the viscosities differ by a certain amount, the nature of the problem becomes decidedly hyperbolic. Again, not something an elliptic solver is suited for.

    This sort of list goes on and on …

    All of these issues exist simultaneously in the climate, which therefore cannot be considered remotely anywhere near a purely elliptic problem.

    That they dealt only with elliptic issues may also explain Mr Nick Stokes’ deep ties to FEM/Spherical Harmonics, as those are the standard “first go to” tools for elliptic problems. However, that does also suggest that his lack of exposure/experience “weds” him to these limited and woefully wanting methods in the larger picture of the real world.

    Also, in my experience, especially with such “off the shelf” packages, the code for the PDE is generally a tiny portion of the complete code. Most of the coding would be for the “nice user GUI”, visualisation issues, and often for handling “geometries”. For example, solving the heat equation on a square steel plate is quite easy. Drilling a hole in that plate, or “notching a corner”, very much complicates the code for introducing boundary conditions and the “layout/discretisation” matters. However, the PDE remains the same. The PDE solver can remain the same too. Though we would at least apply a little better matrix algebra to the Ax=b side to improve computational efficiency. Still, the point should be clear: the code for simpler elliptic problems may require a couple of thousand lines, while the code for the “nice GUI”, geometry discretisation, and visualisation may require 30,000+ lines of code.
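
    To make the “quite easy” claim concrete (a deliberately bare-bones sketch of mine, with made-up boundary values): steady-state heat conduction on a square plate is a few lines of Jacobi iteration; the other 30,000 lines are packaging.

    import numpy as np

    # Laplace equation on a square plate with Dirichlet boundaries:
    # one edge held at 100, the other three at 0.  Plain Jacobi iteration.
    n = 50
    T = np.zeros((n, n))
    T[0, :] = 100.0                        # hot edge (illustrative value only)

    for _ in range(5000):                  # relax the interior points
        T[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] +
                                T[1:-1, :-2] + T[1:-1, 2:])

    print(T[n // 2, n // 2])               # centre tends to 25 by symmetry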

    Once again, not having examined the code in deep detail, I reserve the right to revise my statements, but almost surely, even if I had the time, Nag is unlikely to release the source to me.

    Finally, on all of the occasions we purchased highly specialised “math solvers” from “vendors”, they turned out to be much more hype than substance.

    In short, off the shelf commercial solvers compete with many free ones, and most of the difference in the “paid for” ones tends to be GUI, visualisation, and “layout” issues. To be the “project leader” for such a development is not in any way necessary or sufficient proof of deep, or for that matter any, knowledge of PDE’s etc.

    Again, Mr Stokes’ claims should be taken with at least a grain of salt.

    2) Mr Stokes’ presentation of his time at Nag etc seems to be some sort of claim that his modelling/math was done in a “commercial setting”, which I think all agree is a very much tougher and more demanding environment. However, I take exception to that: a position at Nag or anything like it (e.g. a software vendor), while commercial, is NOT actually any kind of “frontline” math/modelling, and crucially, “software vendors” DON’T HAVE SKIN IN THE GAME.

    If an engineer designs a bridge relying on a buggy Nag routine, and the bridge collapses, killing many and losing much investment … who goes to prison? The engineer, or the library vendor/Nag etc? I am pretty sure it’s the engineer, not Mr Nick Stokes and his Nag library.

    … if you have not ever had real skin in the game, your claims of this sort are an insult to those who have. Just as anybody who claims to have been in combat, or to know what combat is like, but has not actually been (even if they served in the military), knows NOTHING about combat, and is insulting anyone who has been.

    I could go on, but I hope the point is clear.

    3) The Ancestry Bizarreness/Dishonesty

    Mr Stokes cherry picked a statement and produced the following:

    ” “Stokes Theorem has absolutely no relation to you (Nick Stokes). “
    Not entirely. Sir George Gabriel’s great grandfather was an ancestor of mine.”

    What a colossal bit of dishonesty, and quite frankly a bit of an own goal.

    The cherry picked statement by Mr Nick Stokes is from a paragraph that CANNOT be taken to mean anything other than a comment about Stokes Theorem. That is, a comment about a theorem, not a person.

    … I am, of course, taking Mr Nick Stokes’ word that his claim of ancestry is in fact true. However, then:

    What is Mr Nick Stokes up to? Is he trying to say my remark was wrong? Clearly it cannot be wrong, since Nick Stokes has NOTHING TO DO with STOKES’ THEOREM. Unless Mr Nick Stokes has a time machine or is hundreds of years old, how could he have any influence on a theorem established long before his birth? There are no indications anywhere that I can find that Mr Nick Stokes has made any contribution to this part of vector calculus/differential topology and Stokes Theorem. If he has, then present that, not some wishy-washy misdirection.

    I find it somewhat sad and bizarre that Mr Nick Stokes would even attempt to make or imply some sort of connection between his mathematical abilities, such as they are, and those of his long-dead and very far removed ancestor.

    Indeed, genetics has long shown that ancestry is NOT a particularly good indicator of important traits and abilities. There are perfectly normal/average people who give birth to several children, one of whom is a genius. Similarly, perfectly normal parents sometimes give birth to serial killers, or an Adolf Hitler, or whoever.

    Too weird for me. Clearly, Stokes Theorem is NOT Nick Stokes, and while he might wish to point to ancestry, he should not try to use that to make a lie. He could have simply stated his ancestry and agreed that he has nothing to do with Stokes Theorem, and then honestly represented the facts. But no, that’s not how our Mr Nick Stokes rolls.

    … anyway, that’s it for me as far as Mr Nick Stokes is concerned.

    … once again, my apologies to all, but I was forced to this.

  5. Nick Stokes says:

    DrO is evidently proud of his doctoral status, which he seems to wish to contrast with mine. But I have a PhD too. Again, I’ll deal with just a few of the distractions. The main one is all the PDE talk. I have said that there is no PDE involved. It’s just integration of observed surface temperature. There is no need for extended expostulation on the matter. If you think a PDE is being solved, what is it?

    In fact it is just a routine use of the FEM, which represents functions specified at points in a space of (usually) continuous functions spanned by basis functions, being simple polynomials on the elements. That is all that is being done here. The representation is then integrated. No issues of boundary integration arise, at element boundaries or anywhere else. They come in with a PDE when you have second derivatives being integrated by parts. That isn’t happening here.
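
    Concretely (standard piecewise-linear elements, stated in my notation): with nodal values T_i and “hat” basis functions φ_i, the integral is

    ∫ Σ_i T_i φ_i dA = Σ_i T_i ∫ φ_i dA = Σ_i T_i A_i / 3

    where A_i is the total area of the triangles meeting node i. Nothing is integrated by parts, so no boundary terms ever arise.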

    I was puzzled by the quadratic claim. It now seems that it was based on the quadratic being the solution of the ordinary DE y'' = 0. But the nearest 2D or 3D equivalent of that is Laplace’s equation, and that does not have just quadratic solutions. In fact, in 2D, the real or imaginary part of any analytic function is a solution (including a constant).

    The issue of satellites and surface temperatures being measured in different places – well, it’s just obvious. In the old days when John Christy at UAH posted daily TLT global temperatures, they were typically about -26C. Surface is generally reckoned to be about 14C. There is no reason to expect that surface temperatures should track TLT. But sometimes they do. Here is a plot (from here) of UAH compared with GISS (and GISS from 2011) to end 2015. Yes, UAH V6 deviates in the way described after 2005. But UAH V5.6 was far closer to GISS than to V6. So much for satellites “contradicting” the surface indices.

    On Fastflo and NAG – I was never employed at NAG, and did not use NAG libraries. I wrote the code at CSIRO. NAG acted as distributor. Here is a Wayback archive (stripped of graphics) of a 1998 numerical simulations page from NAG.

    • Clive Best says:

      Sorry for being out of the loop. I am on Heron Island and only have 30 mins of slow internet per day.

      I should be back in circulation tomorrow evening.

      Let’s keep it civil till then 😉

      • Nick Stokes says:

        Wishing you luck with Cyclone Debbie. Hopefully you’ll be well south of the worst.

        • Clive Best says:

          I was wondering why it was getting windy!

          I need to learn WebGL. I have a nice IDL 3-D plot of a spherical triangulation, but it has taken over 80 hours to calculate!

          • Nick Stokes says:

            Clive,
            Yes, WebGL is a marvel, but programming it can be a pain. I’ve found it useful to set up boilerplate code for a sphere, so that I can just concentrate on the problem-specific aspects (data). I’ve described the system here; I’ll be posting a new version soon.
