Climate Models compared to Hadcrut4

Estimates of future climate change are based on the mean ‘projection’ across an ensemble of models. This is based on an assumption that each model is equally likely, but is that really true?  Surely it is far more likely that just one of the models is nearly “correct” and that the  others are simply wrong. This is normally the way physics progresses through a “selection of the fittest” process.
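The difference between averaging the whole ensemble and selecting the single best-fitting model can be sketched numerically. This is a hypothetical illustration with synthetic numbers, not real CMIP5 output:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten "models" warm at different rates; "observations" follow one
# particular rate plus measurement noise (all numbers invented).
years = np.arange(2000, 2015)
obs = 0.015 * (years - years[0]) + rng.normal(0.0, 0.05, years.size)

rates = np.linspace(0.005, 0.045, 10)            # degC per year
models = rates[:, None] * (years - years[0])     # shape (10, n_years)

# Equal-weight ensemble mean -- the usual multi-model average
ensemble_mean = models.mean(axis=0)
ensemble_rmse = np.sqrt(((ensemble_mean - obs) ** 2).mean())

# "Selection of the fittest": keep only the model closest to observations
rmse = np.sqrt(((models - obs) ** 2).mean(axis=1))
best = int(np.argmin(rmse))

print(f"ensemble-mean RMSE: {ensemble_rmse:.3f}")
print(f"best single model (#{best}) RMSE: {rmse[best]:.3f}")
```

When one model really is close to the truth, the equal-weight average is pulled away from it by the outliers, which is the point being made above.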

We look first at the most sensitive model in CMIP5, GFDL-CM3, developed by the Geophysical Fluid Dynamics Laboratory at Princeton. It has an equilibrium climate sensitivity (ECS) of >4C. This model already disagrees strongly with the latest Hadcrut4 temperature distributions, as shown below. The plot shows the distribution of temperature anomalies relative to a 1961-1990 normal for GFDL-CM3 compared with that observed by Hadcrut4. The data are averaged over all months during the 4-year period 2011-2014.
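The anomaly calculation behind such plots can be sketched in a few lines. This uses synthetic data for a single location; real analyses work on gridded monthly fields, but the baseline logic is the same:

```python
import numpy as np

# Minimal sketch (synthetic data): compute monthly anomalies relative to a
# 1961-1990 "normal", then average over 2011-2014.
rng = np.random.default_rng(1)
years = np.arange(1950, 2015)

# Monthly temperatures: seasonal cycle + slow trend + noise, shape (n_years, 12)
seasonal = 10.0 * np.sin(2 * np.pi * np.arange(12) / 12)
temps = (seasonal[None, :]
         + 0.01 * (years - 1950)[:, None]        # gentle warming trend
         + rng.normal(0.0, 0.3, (years.size, 12)))

base = (years >= 1961) & (years <= 1990)
climatology = temps[base].mean(axis=0)           # one "normal" per calendar month

anomalies = temps - climatology[None, :]         # seasonal cycle removed
recent = (years >= 2011) & (years <= 2014)
print(f"mean 2011-2014 anomaly: {anomalies[recent].mean():.2f} C")
```

By construction the anomalies average to zero over the baseline period, so any recent offset reflects change since the normal, not the seasonal cycle.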


There is a huge disagreement over most of the globe. North America, South America and the whole of Asia are notably cooler than predicted by the model. Ocean temperatures are also much cooler. There are very few stations with continuous records in Africa to say very much, and this remains a problem with all datasets. My conclusion is that GFDL-CM3 is simply wrong. Next we do the same thing using the lowest-sensitivity model in the ensemble – GISS-E2-R, which was developed by NASA. GISS-E2-R has an ECS of ~2C.


This model shows better general agreement, although the details don’t match perfectly either. My conclusions from all this are:

  1. The data favour low sensitivity models.
  2. High sensitivity models are ruled out.
  3. The coverage in Africa, the world’s second-largest continent, is very poor. There are very few continuous monthly records available in Africa.
  4. Antarctica is poorly monitored.
  5. The upper limit on equilibrium climate sensitivity should really be lowered to ~3C.

You can see an animation of the full Hadcrut4 yearly development of temperature anomalies here, since it is too big to insert directly in this page. There you can see how the observed coverage varies with time. Only stations with a 12-month record are included for any given year.



23 Responses to Climate Models compared to Hadcrut4

  1. A C Osborn says:

    Considering that the latest sets of adjustments to Hadcrut (and GISS) have tried to bring it more in line with the model output it looks like they still have a way to go yet.

  2. Ron Clutz says:

    An analysis of 42 CMIP5 models’ temperature projections led to INMCM4 as the best of the bunch.

  3. DrO says:

    A C Osborn is on the right track. When you must apply massive “administrative adjustments” to the (land based) temp records just to force the “data” to “agree” with the “models” … there is something seriously wrong.

    There are many other profound and foundational problems too. Just a few to consider are:

    1) The entire premise of the article is flawed. Just because you have an “ensemble” there is NO REASON (so far as one can tell) that any of the models should be “correct” or even “near correct”.

    Just because you use a shotgun, that does not in itself guarantee hitting the target, or even being “close” in any important sense.

    2) The reliance, in the manner used by the IPCC et al, on ensembles is more of a “lawyer’s trick” than math/science. As illustrated on earlier occasions, the IPCC-style ensembles exist purely to trick the public: to give the “appearance” that something “useful” has come of their models, and that their “projections” should therefore be believed (when nothing could be further from the truth).

    Incidentally, although I provide a more technical explanation of the whys/wherefores of ensembles in a monograph I have yet to complete, and have previously offered the “ensembling an airplane with a submarine to model a car” metaphor and the like, a few additional comments on “IPCC ensembles” may be worthwhile:

    a) If you look at your Figure 1 here, how come each and every simulation has a vastly different initial condition? Surely all “projections” should start from a single IC; and notice the massive spread in the ICs (3C! That’s 3 – 5 times the entire warming of the 20th century).

    b) How come there is no out-of-sample verification?

    c) How come the comparisons exclude the known to be “un-fiddled” satellite data, and rely only on guessed or fiddled data?

    … and so on, and so on.

    Most important of all, the real point of perturbation analysis is to test for model stability. HOWEVER, the climate is KNOWN to be unstable.

    Therefore, a proper “ensemble” may expect to have bits going all over the place, reflecting instability. Since there is NO WAY KNOWN TO MAN to model this type of fundamentally unstable phenomenon in general, there is NO WAY WHATSOEVER one can make any statement whatsoever about the “correctness” of one or another simulation, as things stand.
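The sensitivity to initial conditions being described can be illustrated with the classic Lorenz-63 system. This is a toy model, not a GCM, and forward Euler is used only for brevity:

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system (sketch, not production)."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

# Ensemble of 20 trajectories from nearly identical initial conditions
rng = np.random.default_rng(2)
states = np.array([1.0, 1.0, 1.0]) + rng.normal(0.0, 1e-6, (20, 3))

spread = []
for _ in range(3000):
    states = np.array([lorenz_step(s) for s in states])
    spread.append(float(states.std(axis=0).max()))

print(f"initial spread ~1e-6; spread after 3000 steps: {spread[-1]:.3f}")
```

An initial spread of one part in a million grows until the ensemble members are scattered across the whole attractor, after which the ensemble mean says nothing about any individual trajectory.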

    Even if we had some model equations that were in some sense “acceptable”, we don’t even have the data to make the most basic assessments. For example, to assess state space entities (attractors, etc) and their properties (fractal dimension, Lyapunov exponents, etc etc) requires many (tens of) thousands of data points, which we don’t have.
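The data-length problem can be illustrated with the simplest chaotic system, the logistic map, whose largest Lyapunov exponent is known exactly (ln 2 at r = 4). This is a sketch of the standard estimator, not climate analysis:

```python
import numpy as np

# Estimate the largest Lyapunov exponent of the logistic map
# x_{n+1} = r x (1 - x) at r = 4 by averaging log|f'(x_n)|.
# Exact answer: ln 2 ~ 0.693. Convergence is slow, which is why
# reliable state-space statistics need very long records.
r, x = 4.0, 0.3
log_derivs = []
for _ in range(100_000):
    log_derivs.append(np.log(abs(r * (1.0 - 2.0 * x))))
    x = r * x * (1.0 - x)

estimate = float(np.mean(log_derivs))
print(f"estimated Lyapunov exponent: {estimate:.3f} (exact: {np.log(2):.3f})")
```

Even for this one-dimensional map, tens of thousands of points are needed before the estimate settles; for a high-dimensional attractor the data requirement is vastly larger.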

    … it is a “trick” to introduce ensembles to then allow the “leap of faith” that then the (guessed) “mean” or trend and some huge (guessed) variance is in any way a meaningful “projection” … and certainly a trick to then rely on that to force a political policy on the planet’s population based on that nonsense, claimed to be science.

    3) It is also of EXTREME interest that the IPCC et al continue to “invent” their own language for such things. For example, they now make “projections”, rather than “forecasts” or “predictions”, and have provided a highly proprietary definition of their words. This is purely to overcome the failure of their methods using proper methods/language … so, as with everything else, they simply “change the data”.

    … when the “Norm” is to fiddle the data and meaning of words, we are in realm of creations by lawyers/politicians, not scientists.

    • DrO says:

      oops, I should have also added, as I have commented in the past, that usage of the word “equilibrium” suggests a deep problem. There is, tautologically, no chance whatsoever that the climate can be in “equilibrium”. At the most, one might hope for a stable “steady-state”, though in reality it looks almost surely to be an unstable steady-state with a complex state space. This is not merely semantics. The ramifications for systems that can attain equilibrium vs. those that can at most attain steady-state, are profound.

      • Clive Best says:

        Yes, you’re right: there is never any equilibrium in the earth’s climate. Solar forcing varies every day as the earth orbits the sun. This drives seasonal changes in climate. Over long periods this seasonal change can vary dramatically with the closest approach to the sun (currently during the Southern Hemisphere summer), especially for insolation maxima in arctic regions. Ice ages end when Northern Hemisphere maximum summer insolation coincides with a nearest approach to the sun.
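The distance effect alone is easy to quantify. As a rough sketch (using the present-day eccentricity of about 0.0167; the inverse-square scaling is the only physics assumed here):

```python
# Back-of-envelope: top-of-atmosphere solar flux scales as 1/r^2,
# so Earth's varying distance from the Sun modulates insolation
# through the year.
S0 = 1361.0   # W/m^2, approximate solar constant at 1 AU
e = 0.0167    # present-day orbital eccentricity

S_perihelion = S0 / (1.0 - e) ** 2
S_aphelion = S0 / (1.0 + e) ** 2
swing = 100.0 * (S_perihelion - S_aphelion) / S0
print(f"perihelion {S_perihelion:.0f} W/m^2, aphelion {S_aphelion:.0f} W/m^2, "
      f"swing {swing:.1f}% of the solar constant")
```

Even at today’s modest eccentricity the perihelion-aphelion swing is several percent of the solar constant, far larger than the forcings usually discussed; its climatic effect depends on which hemisphere and season it coincides with.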

        • DrO says:

          I am not sure if we are really saying the same thing … we might be.

          Equilibrium vs steady state does not require solar or orbital variation. It is a property of the system under study. Strictly speaking, only “closed systems” have a chance to achieve pure equilibrium, and even then it is not guaranteed.

          However, much much more important is that open systems, where a “stable” steady state might be the most one could look for, can also exhibit all manner of instabilities (both bounded and unbounded), including, importantly, aperiodic and other such phenomena.

          A lava lamp in a dark room with a constant power supply, and at steady state, does not have any “external variability”, yet its internals are decidedly “unstable/stable steady state” (i.e. aperiodic).

          So, let’s see if the many armies of climate modellers with mountains of supercomputers could predict what a lava lamp would be doing even in a few weeks, never mind 100 years … I don’t think so.

          How about something simpler still, like the simplest Hele-Shaw cell … again, no chance whatsoever to make any meaningful predictions, and here there is not even a question of equilibrium vs. steady-state.

          The planet would, almost surely, exhibit aperiodic behaviour even if there was no solar or orbital variation for the same fundamental reasons the lava lamp, Hele-Shaw cell, etc do, except now exponentially more so.

          Under such conditions, prediction is, for practical purposes (or at least for more than the shortest time horizon, and certainly in the IPCC context), impossible.

          Being clear about steady states and their properties is the fundamental issue in this topic, and is required to put a stop to a huge amount of nonsense.

          • clivebest says:

            Yes, the climate is an open system which will never be in thermal equilibrium. It can perhaps be viewed as reaching a long-term ‘steady state’ if net energy flows are in overall balance within that time period. However, that time period has to be longer than the maximum wavelength of natural internal variability.

    • Clive Best says:


      I am really just following the party line in order to make the point that extreme warming is ruled out by the data. Yes, there is a trend to correct weather station measurements, seemingly to reinforce the ‘message’. It may be deliberate or, to be kinder, it may be ‘subconscious’. If your status and funding are on the line to promote a looming crisis then this must affect your judgement.

      • DrO says:

        In fact, extreme warming (whatever that may be) is NOT ruled out. We simply CANNOT say anything about what planet’s climate will be doing within the “context” of the IPCC et al framework.

        The point is that NO model can be of any assistance here, even if you had a proper model (which they don’t) and proper data (which is not even possible for many decades yet). Again, this is a fundamental property of the universe. Some things are predictable, some things are not: The climate is not (in the IPCC context)!

        The so-called “administrative adjustments” in the land-based temp data are quite clearly deliberate, as they have said as much. They have come up with all manner of hand-waving arguments as to why they must apply “manual” adjustments. Unfortunately, on closer scrutiny, it quickly becomes clear that the “administrative adjustments” are a sleight of hand required for political purposes, even when contradicted by the facts. I can expand on this another time, but it is quite clearly the case.

        However, that is not the only data manipulation. There are huge amounts of data that are abused. I have shown you in the past real satellite data that contradicts/crushes the model assumptions about “solar in/out” by about 20 W/m^2, in models where, according to them, the “sky is falling” at 2 – 5 W/m^2. Indeed, their models have the WRONG SIGN on this.

        The “atmospheric scrubbing calibrations” are another example of fiddling the data.

        and so on, and so on.

        … so I must object loudly to the “explanation” that this is the practice because their funding depends on it.

        … though, indeed, I believe you are correct, but then … the “jig is up”, and one thing you can’t call it is “science”, and we should be putting a stop to their attempts to pretend that it is science.

        … it is most objectionable to then use “fear mongering” on the basis of lies and rubbish to scare the crap out of the population and to cheat them out of their votes, taxes, etc.

        Giving any suggestion, or “playing along” with rubbish only reinforces the rubbish in the minds of some … why not just be scientific about it (e.g. honest).

        It is necessary for them to masquerade as scientists, since sociologists have demonstrated that scientists have (or used to have) the highest credibility, and politicians about the lowest. It is for this reason that most political activism since the 1960’s has generally masqueraded as “science” … since if they were honest, and admitted it was a political matter, nobody would pay attention (or vote, or send funds, etc).

        … let’s put an end to it … if that means some math geeks lose their funding … so be it (of course, being a math geek myself, I too must accept this) … it is not just the scientific thing to do, it is the “just” thing to do.

  4. DrO says:


    In respect of your comment “It can perhaps be viewed as reaching a long term ‘steady state’ …”, that is just not correct.

    In a critical sense, the entire point of stability and state space analyses is to establish the NATURE of the steady-state. One can have “unstable/stable” steady-states … that is exactly what chaotic systems demonstrate.

    In that case, it is utterly meaningless to speak of maximum wavelength being “steady state”. One way to demonstrate this is to examine the Fourier series (FS) of stable (e.g. periodic) systems compared to aperiodic (i.e. chaotic) systems.

    With stable/periodic systems, almost surely, there will be (a relatively) few wavelengths composing the FS representation.

    Similarly, performing a Taylor series expansion will also result with a (relatively) few terms in the expansion, and they will be the lower order terms (assuming the system is even “analytic”, but that discussion becomes very technical very quickly, so will save that for later).

    Crucially, with stable etc systems, the importance of the higher order terms in the FS or Taylor series becomes (often) exponentially decreasing.

    … this is also why Finite Difference, Finite Element, etc. methods can succeed for stable systems (i.e. for obvious practical reasons, all those methods must truncate the Taylor expansion to the “long wavelength” components only, and thus throw away/ignore the rest of the infinite series/higher-order terms).

    With aperiodic systems, exactly the opposite is true. Here, the higher order terms become increasingly more important. In short, you really do need an infinite length expansion, as any truncation “throws the baby out with the bathwater”. Put differently, crudely speaking, the short wavelength components become the more important bits.
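The contrast between the spectra of periodic and aperiodic signals can be demonstrated in a few lines. The signals here are illustrative stand-ins (a two-tone sine and the chaotic logistic map), not climate data:

```python
import numpy as np

n = 4096
t = np.arange(n)

# Periodic signal: exactly two Fourier modes
periodic = np.sin(2 * np.pi * t / 64) + 0.5 * np.sin(2 * np.pi * t / 16)

# Aperiodic signal: the chaotic logistic map at r = 4
x, vals = 0.3, []
for _ in range(n):
    x = 4.0 * x * (1.0 - x)
    vals.append(x)
chaotic = np.array(vals) - np.mean(vals)

def frac_power_in_top_modes(signal, k=5):
    """Fraction of total spectral power carried by the k strongest modes."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    power[0] = 0.0                      # ignore the mean
    return np.sort(power)[-k:].sum() / power.sum()

p_frac = frac_power_in_top_modes(periodic)
c_frac = frac_power_in_top_modes(chaotic)
print(f"top-5 modes carry {p_frac:.2f} of periodic power, "
      f"{c_frac:.2f} of chaotic power")
```

The periodic signal concentrates essentially all its power in a handful of modes, so truncating to “long wavelengths” loses nothing; the chaotic signal spreads power across the whole band, so any truncation discards most of the dynamics.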

    Even more crucial for the IPCC context is that their horizons are quite short, compared to any “long wavelength” dynamics in the planet’s machinations. Thus, by their own choice of forecast horizon, they can be successful only if/when modelling the bits with the shortest wavelengths anyway.

    The meaning of steady state is rather more complex than just thinking of max wavelength, and very poorly understood by a vast number of speakers on this subject.

    … if you email to me directly your email address, I think I can arrange for a complimentary book that provides some basic mathematics on some of these issues.

    • Clive Best says:

      If the climate really is aperiodic (chaotic) then I guess anything could happen. Climate models then don’t make any sense at all in such a case. In that sense ice ages and interglacials could be underlying strange attractors, and the climate flip-flops between these two different phases triggered by Milankovitch orbital changes. Certainly no-one can explain the 100,000-year eccentricity cycle simply based on changes in insolation. That would support some sort of underlying non-linearity.

      That is if I understand you correctly.

  5. profmicawber says:

    Clive says “Estimates of future climate change are based on the mean ‘projection’ across an ensemble of models. This is based on an assumption that each model is equally likely, but is that really true? Surely it is far more likely that just one of the models is nearly “correct” and that the others are simply wrong. This is normally the way physics progresses through a “selection of the fittest” process.”
    Scientific method depends on experimental verification of theories and models. The problem is that none of the GCM models have the correct physics. That is why all are wrong. Mathematics is the tool of physics. If the basic physics is incorrect, no amount of adjustments or statistical tweaking can make it right.
    Note that Gavin Schmidt, Director of NASA GISS at Columbia University, said in his Carbon Brief interview that his background is confined to pure and applied mathematics. There is no evidence of the classical physics so necessary to understand ocean and atmosphere dynamics.
    Moreover, it is amazing that arguments still focus on atmospheric datasets. Anthropogenic climate change studies have concentrated on the trivial 7% of warming in the atmosphere and the even more trivial data over the 30% of land surface.
    The problem with the HadCru dataset discrepancies was dealt with through in situ field experiments in the tropical Pacific. The problem was thought to lie in the methods of seawater surface temperature measurement in the mid-Pacific during the mid-20th century. The experiment measured hourly temperature and salinity at the surface and three metres between Tahiti and Hawaii.
    (Matthews, J. B. R., 2012, Comparing historical and modern methods of Sea Surface Temperature measurement – Part 1: Review of methods, field comparisons and dataset adjustments, Ocean Sci. Discuss., 9, 2951-2974, doi:10.5194/osd-9-2951-2012.
    Matthews, J. B. R. and Matthews, J. B., 2012, Comparing historical and modern methods of Sea Surface Temperature measurement – Part 2: Field comparison in the Central Tropical Pacific, Ocean Sci. Discuss., 9, 2975-3019, doi:10.5194/osd-9-2975-2012.)
    The basic assumptions that the top 10m of oceans are well-mixed and of uniform temperature were found to be untrue. There is a strong daily cycle with temperature differences between the surface (0.5m) and 2m with a mean of 0.3C but as high as 1C. The discrepancy between datasets was 0.3C. Met datasets are not accurate enough for this to be significant. The invented excuse that it was due to engine-room warming of seawater intake measurements does not bear even the most cursory examination of basic physics. Water temperatures measured 1m inside a 30 cm pipe moving at 1 m/s cannot possibly heat up in a 40C engine room. Water radiators have huge surface areas to compensate for the 3000-times-larger heat capacity of water. If it worked like the supposed engine-room warming, we would turn our room radiators on during hot summer days to cool them down! Amazingly, the engine-room correction is in textbooks of corrections for standard oceanographic data processing.
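The pipe argument can be checked with back-of-the-envelope numbers. The geometry below (30 cm pipe, 1 m/s flow, 1 m of pipe inside the engine room) comes from the comment above; the seawater properties are standard approximate values:

```python
import math

# Hypothetical check of the engine-room warming claim.
rho = 1025.0      # kg/m^3, seawater density (approximate)
c_p = 3990.0      # J/(kg K), seawater specific heat (approximate)
diameter = 0.30   # m, intake pipe
speed = 1.0       # m/s, flow speed
length = 1.0      # m of pipe exposed to the engine room

area = math.pi * (diameter / 2.0) ** 2
mass_flow = rho * area * speed        # kg/s passing the sensor
residence_time = length / speed       # seconds spent in the warm room

# Power the pipe wall would have to deliver to warm this flow by 0.3C:
power_needed = mass_flow * c_p * 0.3
print(f"mass flow {mass_flow:.1f} kg/s, residence time {residence_time:.0f} s, "
      f"power for +0.3C: {power_needed / 1e3:.0f} kW")
```

Warming the flow by 0.3C would require on the order of 90 kW delivered through about one metre of pipe wall in roughly one second of contact, orders of magnitude beyond what convection from warm engine-room air could supply.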
    Voluntary Ship Program meteorological measurements have recorded SST at unknown depths to only half or 1 degree C since 1955. Measurements would need to be much more accurate to have any significance for dynamic models. Even worse, salinity is never measured. Seawater density depends on both temperature and salinity. There is no way to compute thermohaline circulation without knowing both surface and subsurface density. Satellites cannot measure these parameters at 1m, 3m or 10m depths. There is no way that thermohaline convection can be computed. That is a basic reason why models fail.
    The dataset problem is not with the HadCru land-air dataset but with air-sea portion. Phil Jones was castigated for his meticulous work in the Climategate furore. The real culprits are based at the University of Southampton, not at the University of East Anglia. Phil was so concerned with his attackers that he did not appreciate the work reported from the mid-Pacific when it was presented as an undergrad thesis under his guidance.
    The authors subsequently showed that the use of evaporation over land, dependent on windspeed and relative humidity, did not apply at sea, where Clausius-Clapeyron exponential temperature dependence applies. The long-term implications were investigated in the continuous daily sea surface data since 1904 at the Isle of Man, where the authors lived. These were presented as a pair of papers to Ocean Science Discussions. The first was rejected by the modeller Editors because their belief in the wrong physics used in models was too strong. The second showed that Isle of Man data suggested that warm tropical waters travelled to the Arctic, basal-melted floating ice and appeared in SST data as pulses of cold and warm waters. It showed clearly that the mid-twentieth century cooling that they originally investigated was simply due to tropical heating during the 400-year solar maximum, melting record volumes of Arctic subsurface ice and resulting in net cooling. This created strong resistance from the Anonymous Reviewers with their strong beliefs in established physics incorporated into models. Fortunately, the format of Ocean Science Discussions allows open comments on printed papers for a period of a month or so. The authors used this to record the reviewers and author responses for the permanent record. This is the most useful part of the online journal. The accepted articles in Ocean Science look nice, neat and settled. The discussions show the real struggle of ideas of established science versus experimental evidence to the contrary.
    One of the most alarming revelations was that belief in models was so strong that model results were used to change field observations. The author comments relating to this are reprinted below in relation to the Isle of Man paper:
    Model Belief versus Scientific Truth
    Belief in models is sufficiently advanced that they are used to change scientific ground truth observations. Science is based on experimental verification of models, theories and statistical assumptions. Zealous belief in models is the reverse of good science. Therefore, science has been abandoned in favour of non-science or nonsense.
    Aerosols and precipitation
    Recent statements in support of aerosol/precipitation model runs are a good example (Osborne and Lambert, 2014). The authors say in a promotional website “Climate models can show observations to be wrong, University of Exeter” (7 April 2014; 8 April 2014). The authors propose altering mid-century precipitation data to match model-computed values! This absurd suggestion is completely contrary to basic cloud physics. The connection between aerosols and precipitation is very tenuous. There are no models that can capture the relationship. Another posting on 20 March 2014 suggests that climate modelling of Arctic and Antarctic sea ice warming is “science at its best”. Models are not science. They are tools, just like pencils. Garbage in, garbage out is the rule. Belief that models are science is wrong.
    Aerosols from sea salt or desert sand form condensation nuclei. Particulates shield incoming radiation as shown during USA 11 September 2001 3-day aircraft ban. While evaporation is uniform over large isothermal areas, precipitation is highly variable. Cloud seeding and aerosol experiments showed precipitation is complex and unpredictable. The work of friends and colleagues Louis Battan (C193), Peter Hobbs (instrumented aircraft), and of our mentor B. J Mason (1950s England) are all examples of experimental ground truth experiments. To ignore hard-won ground truth data and actually alter raw data is completely inexcusably wrong. It is bad science at its worst.
    Altered ground truth data removes important physical processes
    Two Wrongs do not add up to a Right. Osborne and Lambert (2014) cite the alteration of mid-century Pacific sea surface temperature data to fit statistics as precedent for altering raw data to fit models. It ignores our experimental verification that mid-century mid-Pacific sea surface temperature datasets are demonstrably wrong (Matthews and Matthews, 2013, doi: 10.5194/os-9-695-2013). It is completely unscientific and absurd to alter raw observational data. The three phases of global warming reported here were completely removed from the record. Models prove nothing, as we stated in our earlier author response to an anonymous reviewer. Observations may be inaccurate. That suggests the need for better-calibrated instruments and measurement by well-qualified scientific observers, as Keeling (1998) found. We pointed out that the mistaken belief in models in complete disregard of actual observations and the use of calibrated peer-reviews was the major problem (http://www.ocean-sci-
    Evaporation is the key to understanding precipitation
    Our unique mid-Pacific ground truth experiment, in the Editor-withheld companion paper, established that evaporation depends only on temperature (through the Clausius-Clapeyron relation) and not on relative humidity or windspeed (see also C54). No models discovered this scientific truth. It was found by in situ observations by highly qualified scientists with calibrated instruments.
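The exponential temperature dependence is easy to see numerically. A minimal sketch using the common Bolton (1980) approximation to the saturation vapour pressure (the 15C and 28C values are illustrative, chosen to match the temperate and tropical waters discussed here):

```python
import math

def saturation_vapour_pressure(t_celsius):
    """Saturation vapour pressure over water in hPa (Bolton 1980 fit)."""
    return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

# Compare a temperate sea surface (15C) with tropical water (28C).
e_15 = saturation_vapour_pressure(15.0)
e_28 = saturation_vapour_pressure(28.0)
ratio = e_28 / e_15
print(f"e_s(15C) = {e_15:.1f} hPa, e_s(28C) = {e_28:.1f} hPa, ratio = {ratio:.2f}")
```

A 13C difference in sea surface temperature more than doubles the saturation vapour pressure, which is why a temperature-driven (rather than windspeed-driven) evaporation law behaves so differently in the tropics.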
    The top 2m of the ocean should now be the focus of climate change studies. All the essential processes of global warming and ocean acidification are in this layer. Yet it is almost completely unstudied. Our finding that warming is accelerating from its current rate of more than 1ºC in twenty years makes further research vitally important.
    Trust ocean observations. Ignore models
    Our papers serve as a warning to the ocean science community that they must avoid following climate research down the blind alley of altering data to fit models. The only truly scientific approach is to find more high-quality scientific observation data. In the ocean we have much better tracers of processes than aerosols. Global warming studies have concentrated on temperature to the exclusion of salinity. However, we are blessed with other chemical and biological tracers for ocean near-surface processes. We present some further supplementary ground truth data in support of the processes reported in the discussion paper (MP).
    Scientific tracers for Ocean Surface Processes
    1 Biological
    Don Williamson (1956) was able to trace specific water masses in a western Irish Sea fjord by their unique plankton, as we discussed (MP). The species grew in the brackish waters of the eastern Irish Sea and was distinct from species from the southern St George’s Channel or North Channel. It was found 120 miles (145km) to the north in Loch Fyne, the last 30 miles (48km) of which is a cul-de-sac, and also 200 miles north in the Minches open sea channel. Williamson states “Only a sudden and temporary increase in the rate of flow from the Isle of Man area could have produced a marked and simultaneous increase in the numbers of all three species in the north-going water by producing a faster moving body of water which would mix less with surrounding water in a given distance and so maintain its planktonic character for a greater distance.” Furthermore, “In both cases the Irish Sea origin of the water seems highly improbable if the rate of transport were only of the order of Bowden’s figure (about ¼ mile (400 m) per day), but much more probable if the water were transported in pulses travelling at many times this rate.” We believe this proves the value of observations of planktonic character in tracing Lagrangian wind-driven coherent surface water masses. Highly qualified scientists can only do this from the surface. There is no possibility of doing this work from satellites or, even worse, by application of statistics or models.
    2 Chemical Tracers
    Salinity and nutrients have been totally neglected in tracing surface water masses for climate change purposes but are essential to understanding processes. This is especially vital for ocean acidification processes. John Slinn, who routinely collected daily Port Erin sea surface samples, did extensive studies of temperature, salinity and nutrients. He states, “The salinity pattern in June 1955 may be interpreted in support of Williamson’s (1956) suggestion that the flow through the Irish Sea is irregular. There is evidence that Atlantic water, although considerably diluted by fresh water run-off, penetrates southwards off the Irish coast” (Slinn, 1974). Indeed he reports an unusually high salinity of 34.5‰ in May 1959, at the height of the solar maximum, as we reported in the reviewed paper (MP). He also reports that annual high-salinity intrusions of North Atlantic water from the North Channel occur in the western fjord “biased towards the Isle of Man”. They occur in March and November and follow the pattern shown in our Figure 5b (MP). It is clear, therefore, that intrusions of Gulf Stream and Labrador surface water follow an annual cycle.
    We believe it is entirely consistent with its tropical origins that unusually high salinity surface water first became apparent in routine samples at Port Erin in March 1959. That was during the peak of the 400-year solar maximum irradiance/sunspot cycles.
    Slinn provides further confirmation of our 1959/1963 hot/cold tropical heating/polar cooling processes. He also notes, “In March 1966, when the winter state of vertical homogeneity might have been expected to prevail, conditions were unusual in that there was a well defined temperature discontinuity over much of the section with the colder water at the surface. This colder water was markedly less saline than that below, with a salinity difference of almost 1‰ between surface and bottom in mid-section.” Furthermore, “It may also be noted that salinities of over 34.9‰, at the bottom of the deep trough are the highest values ever recorded for this part of the Irish Sea.” We suggest the high-salinity bottom water is a remnant of the earlier years’ record warm salty intrusions. The fjord sills trap them in the deep channel off Port Erin. Surface waters are from the ongoing polar icemelt. Slinn goes on to associate nutrient depletion in the upper 30m water column with the observed stratification.
    Gulf Stream High Salinity Intrusions
    Tropical high salinity seawater had been observed in studies of tide pools on the southeast of the Isle of Man (Naylor and Slinn, 1958). These intrusions were reported to be due to sustained strong onshore winds. For example, on 12 May 1953 seawater, of salinity 34.3‰, entered the lowest pool but had little effect on the upper pool where the water was shallower. There, during neap tides, the “salinity rose from 36.0‰ to 38.6‰ in the six hours after high water, which occurred at 1100h GMT”. A temperature rise from 15.5ºC to 22.2ºC over a period of six hours was recorded. This is a clear demonstration of temperature-dependent evaporation producing high salinity water in a shallow evaporative basin. We observed similar high evaporation in high salinity (>35.5‰) high temperature (>28ºC) water in the north Pacific in the first part of these companion papers. It is clear proof of the Clausius-Clapeyron evaporation temperature dependence rather than the usual wrong assumption of windspeed and relative humidity.
    Ocean Surface Science
    Failure to publish the two companion papers suggests ocean scientists endorse the bad practices of climatologists. Copernicus Ocean Science has a great opportunity to be the world’s leading forum for honest, open discussion of scientific verification studies of the top two metres of the oceans. We believe reviews should be without anonymity and with vested interests fully disclosed. Without this, science will become nothing more than a political pawn.
    Please stop discussing models and statistics as science. They are tools, nothing more. They are subservient to the scientific method, which fundamentally depends on experimental ground truth. This can only be gathered in situ because satellites cannot provide subsurface data on physical, chemical or biological tracers. It requires a completely new focus on the top 2m.
    Naylor, E., and Slinn, D. J.: Observations on the Ecology of Some Brackish Water Organisms in Pools at Scarlett Point, Isle of Man, Journal of Animal Ecology, 27(1), 15-25, 1958.
    Osborne, J., and Lambert, H.: The missing aerosol response in twentieth century mid-latitude precipitation observations, Nature Climate Change, doi: 10.1038/nclimate2173, 2014.
    Slinn, D. J.: Water circulation and nutrients in the northwest Irish Sea, Est. Coastal Mar. Sci., 2, 1-25, doi: 10.1016/0302-3524(74)90024-3, 1974.
    Published as Author comments on Anonymous Reviews ref C238 on:
    Possible signals of poleward surface ocean heat transport, of Arctic basal ice melt, and of the twentieth century solar maximum in the 1904-2012 Isle of Man daily timeseries, Matthews, J. B. and Matthews, J. B. R., Ocean Sci. Discuss., 11(1), doi: 10.5194/osd-11-47-2014, 2014.

    In summary, we need in situ field data in the top few metres of the oceans to get closer to the scientific truth. Continuous coastal daily or hourly timeseries provide a relatively inexpensive approach to correctly formulating and verifying models. That is the scientific method. Each location has a unique position on the eleven interconnected surface gyres, with unique impacts on extreme weather, local communities and ecosystems.
    The analyses so far all agree that ocean surface warming is proceeding exponentially in halving time increments. With the north Pacific above long-term means by +3C in 2014, the authors suggest +4C by 2016 at current rates. Is it correct? We know the models are definitely wrong.

    • Clive Best says:

      Thanks for that detailed post. There are some very interesting points you make and it sounds like you also have some inside knowledge. I have studied the land temperatures from weather stations and had simply assumed that the SST data were correct. It was only when I plotted the spatial distribution of ocean temperatures that I realised just how poor the ocean coverage really is, lasting well into the second half of the 20th century. There are essentially only a few shipping lanes that are well covered.

      The basic assumptions that the top 10m of the oceans are well-mixed and of uniform temperature were found to be untrue. There is a strong daily cycle, with temperature differences between the surface (0.5m) and 2m averaging 0.3C but reaching as high as 1C.

      This means that time of day is just as important as the month! Ships are moving and take measurements as and when they pass by. To make matters worse, the water is always moving as well: currents move heat around the oceans, and the mixing with depth depends on the weather. If seas are very calm the surface will be warmer than if seas are stormy, and so on.
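      The time-of-day sampling problem can be made concrete with a toy calculation. This is a hypothetical sketch, not real ship data: assume the near-surface temperature follows a sinusoidal diurnal cycle with a 0.3C peak-to-trough range (the mean surface-to-2m difference quoted above), peaking mid-afternoon, and compare the true daily mean with the mean of observations taken only between 12:00 and 18:00, as a ship passing in daylight might.

      ```python
      import math

      # Toy diurnal cycle: T(h) = T_MEAN + A*cos(2*pi*(h - PEAK_HOUR)/24).
      # Amplitude A = 0.15 C gives a 0.3 C peak-to-trough range, peaking at 15:00.
      T_MEAN, A, PEAK_HOUR = 20.0, 0.15, 15.0

      def sst(hour):
          """Toy near-surface temperature (C) at a given hour of the day."""
          return T_MEAN + A * math.cos(2 * math.pi * (hour - PEAK_HOUR) / 24.0)

      hours = [h / 2.0 for h in range(48)]            # half-hourly, full day
      afternoon = [h for h in hours if 12 <= h < 18]  # daytime-only sampling

      true_mean = sum(sst(h) for h in hours) / len(hours)
      biased_mean = sum(sst(h) for h in afternoon) / len(afternoon)
      print(f"true daily mean:     {true_mean:.3f} C")
      print(f"afternoon-only mean: {biased_mean:.3f} C")
      print(f"sampling bias:       {biased_mean - true_mean:+.3f} C")
      ```

      Even this crude model gives an afternoon-only warm bias of roughly 0.1C, a sizeable fraction of the century-scale trend being measured, which is the point: when a ship samples matters.
      
      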

      Completely agree about the uncertain role of aerosols in cloud formation and precipitation. The tendency to correct measurements to agree with models is unbelievably dangerous, let alone arrogant.

      “Please stop discussing models and statistics as science. They are tools, nothing more. They are subservient to scientific method that fundamentally depends on experimental ground truth. This can only be gathered in situ because satellites cannot provide subsurface data on physical, chemical or biological tracers.”

      I completely agree!

  6. Pingback: Climate Models compared to Hadcrut4 by Clive Best | Climate Collections

  7. Hans Erren says:

    Hello Clive, I updated the CMIP5 vs UAH/RSS lower-troposphere temperature graph of Roy Spencer. As you are familiar with the CMIP5 result suite, may I kindly ask if you can reproduce Spencer’s graph using the CMIP5 lower-troposphere results?

    • Clive Best says:


      Shown below is my comparison of Hadcrut4, RSS and UAH to CMIP5 models. The basic problem with this type of comparison is normalising all the temperature anomaly data to the same time period. I used 1961-1990 for CMIP5 since this is the same period as Hadcrut4. Therefore when comparing RSS and UAH to this I shift their zero line up by about 0.25C, since their normalisation period is something like 1978-2010. The agreement between all data sets is not bad. Others have noted that the trend is less in the satellite data than in the Hadcrut4 data (see climate4you). I also show a smoothed averaged trend across all 3 datasets in magenta. There is no doubt that the trend since 1978 is less than nearly all model results.
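      The re-baselining step described above can be sketched as follows. This is a minimal illustration with made-up numbers, not the actual Hadcrut4/RSS/UAH series: to compare anomaly series defined against different normal periods, subtract each series’ own mean over a common reference window so that both read zero over that window.

      ```python
      # Re-baseline anomaly series onto a common reference period.
      # Synthetic example: 'sat_like' anomalies use a later (warmer) baseline,
      # so they sit ~0.25 C below an otherwise identical series.

      def rebaseline(series, years, ref_years):
          """Shift anomalies so their mean over ref_years is zero."""
          ref = [v for y, v in zip(years, series)
                 if ref_years[0] <= y <= ref_years[1]]
          offset = sum(ref) / len(ref)
          return [v - offset for v in series]

      years = list(range(1979, 1991))
      hadcrut_like = [0.10, 0.05, 0.20, 0.05, 0.25, 0.05,
                      0.00, 0.10, 0.25, 0.25, 0.20, 0.35]
      sat_like = [v - 0.25 for v in hadcrut_like]  # same signal, shifted zero

      # Put both on the same 1979-1990 reference window
      had_rb = rebaseline(hadcrut_like, years, (1979, 1990))
      sat_rb = rebaseline(sat_like, years, (1979, 1990))

      # After re-baselining the two series agree point by point
      print(max(abs(a - b) for a, b in zip(had_rb, sat_rb)))
      ```

      In practice one either re-baselines both series like this over an overlap period, or equivalently applies a single constant offset (the ~0.25C above); either way only the zero line moves, never the trend.
      
      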

  8. Pingback: Il mondo cambia più del clima | Climatemonitor

  9. Jerry says:

    Thank you for your efforts in maintaining and updating your web site. It is quite refreshing to see someone taking a scientific approach to the “climate change” discussion. The comments are also much more in line with what I would expect from a scientific discussion with the desired outcome to be a closer approximation of a truly scientific model. Thanks to one of your commenters I found a recent interview of NASA’s Dr. Gavin Schmidt and to be frank it scared the heck out of me. First off this man refers to himself as a scientist and his background is entirely mathematics. I don’t see any scientific training that he has undergone. Also the way he discusses topics is openly stated as opinion rather than being fact based. His dismissal of data that does not match his personal opinion is, to me, blasphemous. And this man has a tremendous influence on government policy. NASA has definitely fallen a long ways in terms of scientific purity. It seems to be much more of a political organization focused on funding than in doing truly scientific research. Please keep up your work and again thank you and the people who you interact with in the comment section.
