Day of reckoning draws nearer for IPCC

Abstract: Global temperatures measured since 2005 are incompatible with the IPCC model predictions made in 2007 by WG1 in AR4. All subsequent temperature data from 2006 to 2011 lie between 1 and 6 standard deviations below the model predictions. The data show, at a confidence level of more than 90%, that the models have exaggerated global warming.

Background: In 2000 an IPCC special report (SRES) proposed several future economic scenarios, each with a different CO2 emission profile. For the 2007 assessment report these scenarios were used to drive model predictions of future global temperatures. The results for each scenario were then used to lobby governments. It would appear that as a result of these predictions there is one favoured scenario, namely B1, which alone is capable of limiting temperature rises to 2 degrees.

The Scenarios: These descriptions are taken from the SRES special report.

The A1 scenario is a case of rapid and successful economic development, in which regional average income per capita converge – current distinctions between “poor” and “rich” countries eventually dissolve. The transition to economic convergence results from advances in transport and communication technology, shifts in national policies on immigration and education, and international cooperation in the development of national and international institutions that enhance productivity growth and technology diffusion.

The A2 world has less international cooperation than the A1 or B1 worlds. People, ideas, and capital are less mobile so that technology diffuses more slowly than in the other scenario families. International disparities in productivity, and hence income per capita, are largely maintained or increased in absolute terms.

The central elements of the B1 future are a high level of environmental and social consciousness combined with a globally coherent approach to a more sustainable development (favoured by IPCC?). Heightened environmental consciousness might be brought about by clear evidence that impacts of natural resource use, such as deforestation, soil depletion, over-fishing, and global and regional pollution, pose a serious threat to the continuation of human life on Earth. A strong welfare net prevents social exclusion on the basis of poverty.

The consequent CO2 emission trends simulated for each scenario are shown below.

Fig 1: CO2 levels for different scenarios

IPCC-approved models were run for these scenarios using the predicted CO2 levels. As discussed before, all IPCC models assume a strong positive water vapour feedback, leading to amplifications of 100-200%. The resulting predictions over a 300-year period are shown below. These graphs undoubtedly helped influence political opinion to limit future warming to 2 degrees, which implicitly supports scenario B1. Note also how scenario A2 explodes exponentially, presumably leading to the extinction of all life on Earth. This is the only scenario without some world eco-governance body, and it seemingly ends in disaster!

Figure 2: AR4 figure for long term predictions for each scenario

Basics: The main focus of this post is the Technical Summary of WG1, which contained specific short-term predictions, using the same models, for warming up to 2030. This is important because good science makes testable predictions over realistic timescales; otherwise it is not science at all but just dogma. The data used in the original 2007 report were available only up to 2005. Since then we have had 6 more years of data which we can now confront with the model predictions. Meanwhile, CO2 levels have continued to rise in line with all scenarios (except the one that fixes levels at their 2000 value).

The figure below shows the new data points plotted over the original figure that appeared in the WG1 report (see here). The new black trend curve is a smoothed FFT fit through the data points. The results are startling.

Figure 3: TS figure from WG1 updated with the latest temperature data from HADCRUT3. The black curve is an FFT smooth through all points. Curves are (quoting TS): Multi-model means of surface warming (compared to the 1980–1999 base period) for the SRES scenarios A2 (red), A1B (green) and B1 (blue), shown as continuations of the 20th-century simulation. The latter two scenarios are continued beyond the year 2100 with forcing kept constant (committed climate change as it is defined in Box TS.9). An additional experiment, in which the forcing is kept at the year 2000 level, is also shown (orange).

The trend speaks for itself. Predicted warming has not occurred, and the actual temperatures are all more than one standard deviation below even the curve for CO2 fixed at 2000 levels (orange). All 6 annual temperatures lie below all scenario curves. The quoted error on a single measurement is 0.05 deg.C, so we can now calculate the probability that these measurements are just a statistical fluctuation.

year sigma     probability
2006   1        0.32
2007   3        0.001
2008   4        0.0001
2009   2        0.04
2010   2        0.04
2011   6        <0.00001

The total probability that the IPCC predictions are correct but the data points are just a fluctuation is vanishingly small, ~10^-14! It is therefore possible to state with over 90% confidence that the IPCC 2007 model predictions are incorrect and exaggerate any warming. Will we have to wait another year, for the 2012 data to be published, before the IPCC admit that they have simply got it wrong?
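For transparency, the naive calculation can be written out in a few lines of R. The table values above are rounded, and the product depends on that rounding and on whether one- or two-sided probabilities are used, so the exact number will differ somewhat from the quoted ~10^-14; the independence assumption itself is challenged in the comments.

sigma <- c(1, 3, 4, 2, 2, 6)   # shortfall of each year 2006-2011, in standard deviations
p     <- 2 * pnorm(-sigma)     # two-sided normal probability for each year (compare the table)
round(p, 5)
prod(p)                        # combined probability, treating the six years as independent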

You can also download a poster about this post


78 Responses to Day of reckoning draws nearer for IPCC

  1. Pingback: Day of reckoning draws nearer for IPCC | Watts Up With That?

  2. Gary Hladik says:

    As I recall, the standard climate alarmist replies are (not that I find them convincing):

    1) The models describe long-term behavior, and the divergence of actual temps from projections hasn’t lasted long enough to be significant, e.g. they mention 30 years as a minimum.

    2) Projections by some models in the IPCC “ensemble” are within the error bars of measured temps, it’s just the ensemble mean that diverges.

    What have I left out?

    • Clive Best says:

      Gary Hladik: What have I left out? …..
      3) Models are untestable within the timespan of one scientific career ?

      • Barry Dwyer says:

        Models are untestable within one scientific career – this is an undeniable fact and a refreshing comment. My 30-year career in the IT industry, and my understanding that some climate cycles are very long, tell me that climate models are likely to be untestable for a period of perhaps 100 years and maybe longer – at least 3 scientific careers. Add another 3 scientific careers after any significant change that is needed to the model.
        That makes them dangerous for policy makers.

        • Gerald m Lerner says:

          Not true. Several “theories” have already been proven wrong and others are on life support. The 1991 paper in Science by Friis-Christensen and Lassen (the notion that shorter solar cycles are somehow correlated with warming) was falsified within 15 years, dramatically so.

      • gerald m lerner says:

        I have to say that that “probability” calculation is the most naive and silly thing I’ve seen from someone with an advanced degree in science. It wouldn’t pass the laugh test in a Probability 1 course. You do understand that the six “yearly” measurements aren’t statistically independent samples? And the model bias for a six-year period is huge for a phenomenon that increases, on average, by about 0.01 degrees/year.
        Finally, I’ve never heard of the “falsification” condition for a scientific theory being expressed in years, some fraction of a scientist’s career.

        • Clive Best says:

          Yes, you are correct. The probability assumes 6 independent measurements all being 1 – 3 sigma below the model prediction. This is too naive, and a better statistical analysis is derived in the comments. However, the point is that all AR4 predictions made in 2007 are statistically too high. So then they argue that trends are only meaningful over 20 years or so, and that if we wait long enough the models will be vindicated. Hence the remark about CAGW only being falsifiable after the originators have retired.

          • Gerald m Lerner says:

            There’s far more to climate science than 30+ year land-sea temperature predictions. But that’s not my point. Falsification isn’t over some time frame, so the entire discussion is beside the point.

            The “theory” is that anthropogenic increases of greenhouse gases are substantively altering the climate. How much, and over what time frame, isn’t a test of the theory, but of model fidelity and assumptions.

  3. Boudumoon says:

    So, if it walks like a duck, quacks like a duck and looks like a duck then it is definitely catastrophic global warming.

  4. Roy Clark says:

    Great post. I fully agree with your analysis. The part that still amazes me is that the IPCC still gets away with these models. The reality is that there is no such thing as a ‘climate equilibrium state’ on any time scale. The perturbation analysis using equilibrium flux equations is invalid before it even gets started. The use of empirical radiative forcing constants is just plain fraud. The whole thing is just climate astrology.
    There is no such thing as a climate sensitivity to CO2. The sun, the wind and the oceans take care of the climate without any help from CO2.
    I have recently (self) published my own research results on Amazon in a little book called ‘The Dynamic Greenhouse Effect and the Climate Averaging Paradox’. It contains a proper, non-equilibrium description of the greenhouse effect and the surface energy transfer. It also gives an overview of the global warming fraud.
    Further details are on my website – http://www.venturaphotonics.com

    • John says:

      It [IPCC] gets away with these models because it [IPCC] exists only to provide legislators with a reason to legislate.
      The reasoning does not have to be accurate because the vast majority of people believe what they are told.
      It is inconvenient that there are sources that not only inform people of that INaccuracy, but that those sources are increasingly being read and believed by people. In fact, they are taking over from the MSM, which is now part of the problem.
      Nothing, I’m sure, that a little legislation cannot control.

    • Brian H says:

      Roy;
      Looking over your site; on the Global Warming page there’s a mashup of copy-paste errors here:
      “Historically, data on subsurface ocean temperatures has been very sparse. This has changed ocean temperature profiles and other data every 10 days. In general, as the sun warms the ocean temperature profiles and other data every 10 days. In general, as the sun warms the ocean during the spring and summer, a stable thermal gradient develops below a daily uniform
      mixing layer.”

  5. Vacslav says:

    … you can’t analyse time series without making assumptions about the nature of the noise. You can’t multiply probabilities as the noise can be (anti)autocorrelated. Standard deviations are not meaningful when the noise is non-standard.

    Observed temperatures do deviate from the IPCC predictions, but there is not enough information and/or substance in “statistical tests” to calculate the extent of this deviation.
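    A toy illustration of the point in R (illustrative numbers only): six “annual” values generated with even modest autocorrelation are clearly not six independent draws, which is what multiplying their probabilities would assume.

    # Hypothetical AR(1) noise for six consecutive years; neighbouring values move together
    set.seed(1)
    e <- arima.sim(model = list(ar = 0.6), n = 6, sd = 0.05)
    round(as.numeric(e), 3)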

    • Ben of Houston says:

      Valid point. Way back in school, my reactors professor told us something that resonates in my mind right now. We were discussing fluidized bed chemical reactors and how it was demonstrably impossible to model them (there was an explosion of variables, including catalyst stickiness, and it quickly devolved via chaos). However, while it is impossible to model, it is possible to control via empirical models on long enough timescales, because you had a general idea about what each variable did. He also told us to greatly beware short-term changes and wrong-way initial reactions.

      However, whether this analysis proves that the IPCC models are wrong or just grossly oversimplifying the climate doesn’t really matter. Either way, the model predictions are invalidated.

      • Steve O says:

        Well said. In discussing the lack of recent warming with various alarmanistas I can tell you that they won’t be convinced. Their three main arguments are:

        1) 15 years is too short a period to be reliable
        2) There are known causes for the interruption in the warming trend, such as the decline of sunspot activity…
        3) Shut up! How many times do we have to explain all this to your ilk? Are you some kind of paid shill for the “fossil fuel industry?”

        We’ll see how the next few years turn out. I am hoping that the predictions from the Tibetan tree ring study are validated, because another 20 years of a cooling trend and it will all be history.

  6. richard telford says:

    It’s not clear what you are basing your standard deviation on, but it appears to be the multi-model means. The multi-model means show substantially less variability than the individual model runs. Since we only have one realisation of observed climate, it is more appropriate to compare this with the individual model runs. What you appear to have done is junk.

  7. Clive Best says:

    The probability argument is simply this:
    The quoted error on a single temperature anomaly measurement is 0.05 deg.C (see here). If you measure the shortfall between the 6 anomaly measurements and the lowest of the 3 scenarios (B1), then you find shortfalls of (1,3,4,2,2,6) standard deviations. If this were due to noise then one would expect ±1 or 2 standard deviations at most. The probability that randomly all of them lie so far below the scenarios is the product of each probability. This leads to a very low probability that the scenario predictions are correct. This is important, because these very scenario curves have been used in long-term predictions to argue for drastic curbing of carbon emissions to limit warming to 2 degrees.

    The question of model uncertainties: here I think we have a different problem. Yes, you are correct: the spread in model predictions seems to be getting larger, leading to the statement that the data are within the spread of the models. This may be factually correct, but the model calculations still consistent with the data are just those with low feedbacks.

    A healthy scientific method is as follows:

    A theoretical model is developed to describe some physical process. The model will have a number of unknown parameters which determine the result. The values for these parameters start as best guesses, and the model then makes predictions of measurable variables for experimentalists. Experiments then make the measurements and compare them to the theory. The model is then either modified with new parameters which better describe the data or, if this is not possible, the model may be rejected.

    The problem with climate science, it seems to me, is that predictions of models made 22 years ago have had a massive political impact, with the consequence that these predictions have been fixed in stone. This is not because the science has not evolved – it has. It is mainly due to the political fallout of being wrong. I accept the basic physics of AGW leading to ~1 degree warming for a doubling of CO2. The feedbacks however are rather uncertain. The models need to be tuned to fit the data and NOT the other way round.

    • Brian H says:

      “that fraction of model calculations still consistent with the data are just those with low feedbacks” … so the next step, which they won’t dare take, is to make a few runs with zero or negative feedback. Just, like, to bracket and give some perspective to the trooth, y’know?

      ;)

  8. bbbeard says:

    You should be using 1-tailed p values, since you are trying to demonstrate underrun, not just disagreement. But in any case you can’t just multiply the p values to find the likelihood of the observed trend. You need to compute chi-squared and then derive a p value. In the present case these are compensating errors (within a couple of orders of magnitude, anyway), but you really should do this correctly.

    In R, using the rest of your assumptions, this would look like

    dtemp <- c(1,3,4,2,2,6)
    1-pchisq(sum(dtemp^2),df=6)

    which gives a p value around 4E-13, about 40 times larger than the naive method, but still negligible.

    However, it's pretty clear that the error on individual data points is closer to the range of 0.08 to 0.10 degrees C, not 0.05. There are several ways of showing this, but the simplest way is to take the standard deviation of the residuals from a linear regression of, say, the last 15 years of actual temperature data. There is also the uncertainty of the trend line you are comparing to that must be included in the calculation. Suppose we use 0.10 as the uncertainty. This cuts in half the "sigma" values (which are actually Z values) you are using.

    In R we would write

    dtemp <- c(1,3,4,2,2,6)
    etemp <- dtemp/2 # to account for 0.10 degree uncertainty
    1-pchisq(sum(etemp^2),df=6)

    and this would provide a p value of 0.00761, or about 3/4 of 1%, which is still pretty small, but not 1E-14. But it shows the importance of using the correct statistic, and the importance of getting the uncertainty estimate right.
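    A rough sketch of the residual-based uncertainty estimate described above, in R. The anomaly values here are placeholders only, standing in for the last 15 years of HADCRUT3 annual data:

    yrs  <- 1997:2011
    anom <- c(0.40, 0.52, 0.30, 0.28, 0.41, 0.46, 0.47, 0.45, 0.48,
              0.42, 0.40, 0.33, 0.44, 0.47, 0.34)   # illustrative numbers, not the real series
    fit  <- lm(anom ~ yrs)
    sd(resid(fit))   # per-year scatter about the linear trend; use this rather than 0.05 as sigma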

    • Clive Best says:

      Thanks for the expert analysis. I am sure your arguments hold more weight than my naive reasoning! We can agree then on a confidence level of >99%. Therefore the statement of >90% is on the conservative side.

      • Brian H says:

        Levels of “90%” and “95%” are an abomination, suited only to the mushy social sciences. Real science demands lotsa 9s. >:(

    • Brian H says:

      “bbbeard says:
      March 1, 2012 at 9:45 am

      You should be using 1-tailed p values, since you are trying to demonstrate underrun, not just disagreement. But in any case you can’t just multiply the p values to find the likelihood of the observed trend.”

      You’re switching the question. That’s not what he was computing. He wanted to know “The probability that randomly all of them (the years’ readings) lie so far below the scenarios”.

      Nice try, but no gold star.

      • bbbeard says:

        You’re switching the question. That’s not what he was computing. He wanted to know “The probability that randomly all of them (the years’ readings) lie so far below the scenarios”.

        No, I’m not. This is a common fallacy. People think it’s okay to treat p values like ordinary fractions of an ensemble and just multiply a bunch of them together and treat the really small number that results as proof of something. In fact it is so ingrained in people to believe that you should just multiply p values to get another p value that I’m pretty sure I won’t convince you on the first go-round. But I’ll give a couple of plausibility arguments and then provide a numeric experiment in R that you can play with.

        (1) First, you do realize that no matter what relation the observed temperatures have to the models, there is a p value less than one for each data point, right? For example, if the first data point had been above the model line by 1 sigma instead of below, then its p value would have been ____ instead of 0.32. Actually, as I pointed out, the 0.32 is a two-tailed p value and should really have been 0.16. The p value of a point above the line should have been 0.84. (In R this is pnorm(1)). (If you insist on sticking with a two-tailed p value, then the p value of a point 1 sigma above the line would also be 0.32, because it is just as far from zero as 1 sigma below.) Well, what happens when you string 6 numbers together that are all p values? If they’re centered around 0.5, as p values should be, then the resulting product is something around (1/2)^6 = 1.6% if they are clustered around 1/2, and more like 6!/7^6 = 0.006 since the p values are spread around. In other words, the product of p values does not behave like a p value. If you do a statistical test that provides a p value output, as a general rule you want the p values to have a uniform distribution for “random” inputs. Just multiplying the p values gives you instead a number that goes to zero as the size of the sample increases, which makes it a really poor choice for a statistic that expresses how unlikely something is.

        (2) Here’s another example. Suppose I have a classroom of 30 children who are “average” for their age. Suppose as an experiment I measure all their heights and compare them to the published mean and standard deviation for their age group. It turns out that the mean and standard deviations of my class are pretty close to the published norms. But someone mischievous comes along and tells me that the class is in fact extremely — extraordinarily — astronomically! –shorter than average. They “prove” this by converting all my measurements to p values and then multiplying them. I do this as instructed and I get a number somewhat smaller than 10^-12. That’s less than 1 in a trillion, so clearly this is a publishable result! I tell Mrs. Smith in the classroom next door, and she repeats the experiment with her class, except she uses the complement of the p values to find out how extraordinarily tall her class is. She also gets a number less than 1 in a trillion, so we decide to publish jointly. What did we do wrong?

        Well, that’s why the chi-squared test was invented. With chi-squared, the assumption is that the N points are drawn from an N-dimensional normal distribution (N=6 for the temperature data, N=30 for the classroom), and basically computes the squared length of the vector of Z values (chi squared is just this squared length). The test asks the question “How unlikely is this vector?” and answers it by computing the weight of the N-dimensional Gaussian probability distribution outside a ball of radius chi. This actually does give a p value that is uniformly distributed between 0 and 1 and so is useful for answering the question of likelihood.

        If you have access to R and know how to use it, do this experiment. Generate 6 normally distributed Z values. Compute the normal p values. Multiply. Repeat 1000 times and see what you get. Paste these commands into R:

        psave <- matrix(0,1000)
        for (j in 1:1000) psave[j] mean(psave)
        [1] 0.01265486
        > sd(psave)
        [1] 0.02812416
        > min(psave)
        [1] 5.723555e-09
        > max(psave)
        [1] 0.3867661

        Now do this: Generate 6 normally distributed Z values. Square them. Add them up. Compute the chi-squared p value of this number using 6 degrees of freedom. Repeat 1000 times and see what you get. Paste these commands into R:

        psave <- matrix(0,1000)
        for (j in 1:1000) psave[j] mean(psave)
        [1] 0.4958259
        > sd(psave)
        [1] 0.2898467
        > min(psave)
        [1] 0.0003332355
        > max(psave)
        [1] 0.9986057

        which confirms that the mean is close to 1/2 and the sd is close to 1/sqrt(12), as expected.

        Nice try, but no gold star.

        You’re going to have to stay after school and clean the erasers. ;-)

        BBB

        • bbbeard says:

          I’m sorry but the R commands did not come through because the assignment token in R (“less than, minus”) was mangled by the HTML parser. Shoulda thought of that. Here’s an alternate expression using that new-fangled equals sign:

          First code chunk:

          psave = matrix(0,1000)
          for (j in 1:1000) psave[j] = prod(pnorm(rnorm(6)))
          plot(ecdf(psave))
          mean(psave)
          sd(psave)
          min(psave)
          max(psave)

          Second code chunk:

          psave = matrix(0,1000)
          for (j in 1:1000) psave[j] = pchisq(sum(rnorm(6)^2),df=6)
          plot(ecdf(psave))
          mean(psave)
          sd(psave)
          min(psave)
          max(psave)

          Also sorry about the misspellings; a previewer might be nice ;-)

          BBB

        • bbbeard says:

          Some text was lost because of the HTML mangler; the last few paragraphs should have read

          If you have access to R and know how to use it, do this experiment. Generate 6 normally distributed Z values. Compute the normal p values. Multiply. Repeat 1000 times and see what you get. Paste these commands into R:

          psave = matrix(0,1000)
          for (j in 1:1000) psave[j] = prod(pnorm(rnorm(6)))
          plot(ecdf(psave))
          mean(psave)
          sd(psave)
          min(psave)
          max(psave)

          You’ll get different values every time you run this, but you will always get a plot that is heavily weighted toward the 1%-ish numbers. The last time I ran this, I got
          > mean(psave)
          [1] 0.01265486
          > sd(psave)
          [1] 0.02812416
          > min(psave)
          [1] 5.723555e-09
          > max(psave)
          [1] 0.3867661

          Now do this: Generate 6 normally distributed Z values. Square them. Add them up. Compute the chi-squared p value of this number using 6 degrees of freedom. Repeat 1000 times and see what you get. Paste these commands into R:

          psave = matrix(0,1000)
          for (j in 1:1000) psave[j] = pchisq(sum(rnorm(6)^2),df=6)
          plot(ecdf(psave))
          mean(psave)
          sd(psave)
          min(psave)
          max(psave)

          You will get a plot showing that the p values are uniformly distributed. The last time I ran this I got
          > mean(psave)
          [1] 0.4958259
          > sd(psave)
          [1] 0.2898467
          > min(psave)
          [1] 0.0003332355
          > max(psave)
          [1] 0.9986057

          which confirms that the mean is close to 1/2 and the sd is close to 1/sqrt(12), as expected.

          [sorry]

          BBB

        • Brian H says:

          You’re still not asking the same question. If the model results cluster randomly around the true data trend, then there is an even chance each reading will be on a given side. Using Z scores to require at least a 2-sigma deviation to count as above or below reduces that probability for each from .50 to about .01.

          So the Model Believer predicts that none of the readings will fall outside the 2-sigma band, and the Model Sceptic predicts all of them will. Testing of the latter, does, I think, permit multiplying (n-1), or even n, of the probabilities.

  9. Mooloo says:

    Since we only have one realisation of observed climate, it is more appropriate to compare this with the individual model runs. What you appear to have done is junk.

    Fine. Which model do you want him to compare it to? Name one.

    If you pick one that runs high, then the divergence will be obvious. If you pick one that runs low, then what catastrophic AGW? Picking individual models will not save them, so your quibble is just that, a quibble.

    Personally I think that averaging models is crazy. But it is the IPCC that does that, not Clive.

    • Ben of Houston says:

      The point of this is to show that the graph as written is wrong. You have two possibilities: either the IPCC miscalculated the mean or they underestimated their error (the “individual runs can be lower”). Either way, it’s wrong.

  10. Bishop Hill says:

    Hadcrut4 will revise the observed temperatures upwards by adopting something like the GISS extrapolation approach to Arctic temperatures.

    Problem solved.

  11. Brian H says:

    What’s this “90%” bumpf? Those are standards used only in the pseudo-sciences, like psychology.

    I’d say the p of 10^-14 means there’s a 99.999999999999% probability the IPCC is wrong. Which is about what I always estimated, give or take 0.000000000002%.

    >:)

  12. Brian H says:

    Gary;
    You don’t get to say that afterwards. That’s splatter from crap thrown at the wall. You have to specify in advance which model is going to be close enough that reality touches even the outermost fringe of its error bars (which appears to be the standard you accept as proof!).

    So, pick one or two, and lets check back in a year or three, shall we?

    Har, har, de-har.

  13. Beesaman says:

    I doubt Richard Black of the BBC will be reporting this!

  14. John says:

    I would like a small extension for the ethicist that came up with this idea; let’s call it extended postpartum abortion, which would permit the abortion of the ethicist.

  15. Richard111 says:

    This layman’s observation: natural events don’t normally proceed in straight lines; nature wiggles. The IPCC must be ignoring a lot of data to produce those consistent straight lines.

  16. ImranCan says:

    It’s good to refute the flawed science (garbage) that has been published and to keep pointing out the irreconcilable differences between the model predictions and actual observations, but your statement at the end indicates a significant misunderstanding of what is going on here:
    “…….before the IPCC admit that they have simply got it wrong?”

    This will never happen because the argument has nothing to do with science or scientific truth. It is about global government, social control and wealth re-distribution. All that will happen is that the angle will change and the focus will shift …. but the ideology will remain. Eventually, it will die out because something better comes along, not because anybody admits that observations don’t match models.

  17. Rhys Jaggar says:

    Given that the IPCC’s models have failed, should a new hypothesis be tested:

    ‘There is nothing in current climate trends which suggests that dangerous climatic change, either in a warming or cooling direction, is taking place’??

    Perhaps another principle might be introduced into national politics also:

    ‘The primary duty of a people’s representatives is to ensure that energy supplies necessary for human existence are provided in a cost-efficient, environmentally friendly manner taking advantage of ongoing advances in science, engineering and technology’.

    Heaven forbid that parliaments operated in the interests of their people.

    What would the elites do if they did??

  18. barry says:

    You described no centring convention, so I downloaded the annual global temperature data from the Met Office. The anomalies are not correct on the graph. Eg, 2010 was 0.5C – you have it at about 0.45C, and the other anomalies in the latter part of the record are likewise offset to slightly cooler.

    But that’s only a small problem. There’s a much bigger one.

    No error bars in the projections – for a < ten-year model-to-obs comparison! The projection lines are multi-model means. For apples to apples you need to compare the range of models in a given scenario. The mean of the ensembles won't reflect the dynamics of interannual variability; it will be smoothed out, which is obvious just by looking at those curves in the graph. Where, at least, is the two-sigma envelope?

    If you don't include the confidence margins, it's not science, it's rhetoric.

    "Meanwhile CO2 levels have continued to rise in line with all scenarios"

    Really? Are all the emissions scenarios the same?

    Sorry, but this post is rubbish. Let's see a proper effort.

    • Clive Best says:

      The data are all downloaded from HADCRUT3 – so the values are correct. The plot you are complaining about is taken directly from the AR4 report. For some reason they never show error bars on the data. I was just overlaying the new data post-2000.

  19. Jack Greer says:

    This is exactly the type of BS that will get you cross-posted at WUWT every time. You can’t be serious, Clive.

    http://www.realclimate.org/index.php/archives/2012/02/2011-updates-to-model-data-comparisons/

    • Clive Best says:

      The realclimate graph is actually a work of genius!

      1. It renormalises anomalies to 1980-1999, making a direct comparison with TS.26 difficult.
      2. It overlays GISTEMP and NCDC so your eye doesn’t notice the large HADCRUT3 discrepancy since 2000.
      3. It strangely then adds a large 95% shading area for “hindcasting”. Since all models know full well the temperature starting in 1990 or even 2000, why would they hindcast from a value 0.2 degrees above it? How also did they know in advance about Mount Pinatubo? It is just the forecasting that matters – as accurately represented in TS.26.
      4. The added shading then has the effect of reducing the overall slope of the central area, so you don’t notice just how low Hadcrut3 is compared to the prediction.

  20. Since the so-called original data have been cooked to death and “lost”, isn’t the entire field of climate astrology, and any trend, simulated or not, to be called into question? We don’t and cannot know the true global temperature, especially to a precision of 0.01 degrees as proposed by the above charts. Isn’t any statement about it, other than that we really don’t know, flat out misleading? As I see it, it is all a pile of synthetic BS not worthy of being discussed even as low-quality science fiction and fantasy.

    • Brian H says:

      The very concept of a “true” global temperature is bogus, in fact. If I add X amount of heat to a cubic mile of dry air over the Sahara I will get a certain rise in temperature. If I add the same amount of heat to a cubic mile moisture-saturated air over the ocean, I will get a small fraction of the same rise. Yet for “averaging” purposes, such blocks of air are given the same “weight”.

      This is nonsense.
      Some time ago, there was a hack piece on Judith Curry (a climate scientist who has been questioning the exaggerated certainty claims) in Scientific American, and one commenter who had been an official reviewer of the IPCC reports from the beginning wrote in, mentioning this and other issues:

      Climate Heretic

      14. Iconoclast 05:06 PM 10/23/10

      The proposition that the average temperature of the earth’s surface is warming because of increased emissions of human-produced greenhouse gases cannot be tested by any known scientific procedure

      It is impossible to position temperature sensors randomly over the earth’s surface (including the 71% of ocean, and all the deserts, forests, and icecaps) and maintain them in constant condition long enough to tell if any average is increasing. Even if this were done, the difference between the temperature during day and night is so great that no rational average can be derived.

      Measurements at weather stations are quite unsuitable since they are not positioned representatively and they only measure maximum and minimum once a day, from which no average can be derived. They also constantly change in number, location and surroundings. Recent studies show that most of the current stations are unable to measure temperature to better than a degree or two

      The assumptions of climate models are absurd. They assume the earth is flat, that the sun shines with equal intensity day and night, and the earth is in equilibrium, with the energy received equal to that emitted.

      Half of the time there is no sun, where the temperature regime is quite different from the day.

      No part of the earth ever is in energy equilibrium, neither is there any evidence of an overall “balance”.

      It is unsurprising that such models are incapable of predicting any future climate behaviour, even if this could be measured satisfactorily.

      There are no representative measurements of the concentration of atmospheric carbon dioxide over any land surface, where “greenhouse warming” is supposed to happen.

      After twenty years of study, and as expert reviewer to the IPCC from the very beginning , I can only conclude that the whole affair is a gigantic fraud.

      Every paragraph a gem.

  21. Ken Green says:

    You ask: “Will we have to wait another year for the 2012 data to be published before the IPCC admit that they have simply got it wrong ?”

    May I humbly suggest that you’re using the wrong metric here?

    What you need to be measuring, to determine the probability that the IPCC will admit its error is the temperature in the region colloquially referred to as “hell.”

    When the temperatures in that region are below zero (C) for long enough that the entirety of the area is covered in snow and ice, only then will the probability of IPCC confession begin to move (glacially) toward 1.0

  22. Dr. Best:

    Those “predictions” are actually “projections” and while predictions are falsifiable, projections are not.

  23. Jack Greer says:

    As best I can tell, you’re taking the error range concerning the accuracy of individual reported temperature values and trying to represent that error value as applicable to the temperature time series, which is an entirely different analysis …. that makes no sense at all.

  24. Bill says:

    typo in poster: “Models have overestimate warming”
    should be “models have overestimated warming” or “models overestimate warming”

  25. Jenny says:

    What does “a smoothed FFT fit” mean?

    • Clive Best says:

      Basically it is a low pass filter which removes high frequency noise and is supposed to leave just the low frequency (smooth?) signal.

      Real reason: I basically can’t afford an IDL license – so I am using a plot package with some built-in smoothing algorithms – this is one of them!
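      Something like this bare-bones low-pass smooth in R gives the flavour (purely illustrative; the actual routine is whatever the plot package provides):

      lowpass_fft <- function(x, keep = 3) {
        n   <- length(x)
        ft  <- fft(x)                              # discrete Fourier transform
        out <- rep(0 + 0i, n)
        idx <- c(1:(keep + 1), (n - keep + 1):n)   # keep the DC term plus the lowest 'keep' frequencies and their conjugates
        out[idx] <- ft[idx]
        Re(fft(out, inverse = TRUE)) / n           # inverse transform, normalised
      }
      set.seed(2)
      x <- 0.01 * (1:64) + rnorm(64, sd = 0.1)     # hypothetical noisy series
      smooth <- lowpass_fft(x)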

      • FFT means “Fast Fourier Transform”, which means fitting a series of sine-waves to the data. The problem with an FFT is that it pre-assumes cyclicity, and is incapable of modelling data that exhibits a continuous trend of change – which is a pretty significant bias in this case!

        You should only use an FFT to filter out cyclical noise, such as the 50-Hz buzz from AC electricity, or seasonal variation in a CO2 signal. You could equally well have chosen an exponential fit, or a polynomial – each of which carry biases too, and may provide equally misleading impressions.

        Any time you curve-fit to data, you get a nice, smooth mathematical curve which looks convincing, but covers up the fact that the curve you get is highly dependent on the underlying mathematics of the fitting, and also on the range of data you’ve fitted. I notice that the IPCC graph includes the whole 20th Century, but the FFT-fitted graph only shows 1985 onwards.

  26. phaedrus says:

    Dr. Best – good stuff – a valuable post.

    In reference to “I fully accept the basic physics of AGW leading to ~1 degree warming for a doubling of CO2.”: there seem to have been many equivalent statements in the skeptical blogosphere over the last week or so. Would you provide your favorite 2 or 3 (or more) technical references establishing this result? Not saying it is not true – I’d just like to review the physics to verify it for myself – in particular the work that convinced you.

    Thank you for your time.

    • Clive Best says:

      I have tried to summarise most of what I have learned about climate science here:
      http://clivebest.com/blog/?page_id=2949

      The basic greenhouse effect follows a logarithmic law. The more CO2 in the atmosphere, the higher the level where the “fog” clears and IR photons escape directly into space. It gets colder with height (unless the air is moist), so slightly less radiation escapes, and the balance is restored by a slight warming of the surface.
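      In numbers, the usual back-of-envelope form is something like the following (standard approximate figures, no feedbacks, so treat it as indicative only):

      dF <- function(C, C0 = 280) 5.35 * log(C / C0)                  # forcing in W/m^2 (standard logarithmic approximation)
      dT <- function(C, C0 = 280, lambda = 0.3) lambda * dF(C, C0)    # lambda ~0.3 K per W/m^2, the no-feedback response
      dT(560)   # doubling CO2 from 280 ppm gives roughly 1.1 deg C before feedbacks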

  27. Rosco says:

    The really funny thing about the AGW con is that the “true believers” call opponents “deniers”.

    Only a blind fool could miss the “Inconvenient Truth” that it is the true believers who are the deniers – they deny reality with their ludicrous theory which has NO substantiated evidence to support it.

    They got the whole basis for their theory wrong by missing the really obvious fact that during the day the atmosphere acts to reduce the heating effect of the solar radiation – not add heat as they idiotically claim.

    Is there any proof ??

    You betcha – because without an atmosphere to REDUCE the heating effect of the solar radiation during the day – and after all during the day is all that matters as there is NO solar radiation at night – I thought that needed explaining to people who deny reality – the Earth would be subjected to temperatures like the Moon – about 120 degrees C.

    After all, both the Earth and Moon are subject to the same intensity of solar radiation !!

    So – CLEARLY – the Earth’s atmosphere actually REDUCES the heating impact.

    Only a real DENIER could argue that isn’t true.

    So the real “deniers” are actually the “true believers” – those who deny reality in favour of their pet “religion”.

  28. DocMartyn says:

    You can use the cumsum or Z-plot method.
    You do a cumulative addition by year, which gets rid of ‘noise’ over time.
    Then plot the cumulative sums vs n. Calculate the slopes and 90% confidence intervals.
    Where the plots diverge there is less than a 0.05^2 chance that the points are part of the same population.
    This works very well if you do not know the line shape. Plotting true vs model should give just noise and a slope of zero.
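    A rough sketch of the mechanics in R, with made-up numbers in place of real observations and model output:

    set.seed(3)
    n     <- 30
    obs   <- 0.010 * (1:n) + rnorm(n, sd = 0.1)   # hypothetical observed anomalies
    model <- 0.020 * (1:n)                        # hypothetical model prediction
    cs    <- cumsum(obs - model)                  # cumulative residuals damp the year-to-year noise
    idx   <- seq_len(n)
    fit   <- lm(cs ~ idx)
    confint(fit, level = 0.90)                    # slope and its 90% confidence interval
    plot(idx, cs, type = "b", xlab = "n", ylab = "cumulative residual"); abline(fit)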

  29. jmrSudbury says:

    It seems that the Foster Rahmstorf 2011 paper yields a linear trend line that is akin to TAR’s line on your figure three. Neat. — John M Reynolds

  30. edbarbar says:

    I think I’m missing something. Did the IPCC provide probability bounds for their estimates? I thought they had not provided this important information.

    It seems to me that at some point it will be possible to determine what those probabilities are, and then it would be interesting to use them to determine the range of temperatures possible, within a standard deviation, into the future based on their models.

  31. Pingback: Clive Best: Day of reckoning draws nearer for IPCC | JunkScience.com

  32. Brian H says:

    Rhys Jaggar says:
    March 1, 2012 at 2:13 pm

    Given that the IPCC’s models have failed, should a new hypothesis be tested:

    ‘There is nothing in current climate trends which suggests that dangerous climatic change, either in a warming or cooling direction, is taking place’??

    What you describe is called the “Null Hypothesis”, often written as H0. It is the assumed, least complex explanation that must first be disproved before you get to consider alternate hypotheses (H1, H2, etc.) There are layers of “null” hypotheses that might have to be discounted/disproven before you get to have some “confidence” that your alternate is the best bet, so to speak.

    The closest the IPCC and UEA/CRU get to this is to say that they couldn’t make their models reproduce the 1970-2000 warming in their computer models without giving a dominant “forcing” weight to human CO2. That this logic is deeply circular and flawed is often pointed out by modeling experts, but ignored by the IPCC. Kevin Trenberth has gone so far as to demand that AGW be given the status of “H0″, and assumed to be the truth, and all other ideas be required to disprove it. I call this outrageous distortion the Trenberth Twist.

    The IPCC reports say that they assign a “high” probability of CO2’s dominance of >90% using (their own) “expert” judgment. This is rank BS, of course. Statistically, in any case, 90% doesn’t even buy you a very interesting speculation, much less scientific “likelihood”. Pure bafflegab.

  33. Brian H says:

    Edit:
    “… they couldn’t make their models reproduce the 1970-2000 warming in their computer models without …”

  34. Paul MacRae says:

    It’s interesting that the model-to-temperature correlation is very good, near-perfect, really, up to 2006, then starts to diverge. So, the models were good until just before the IPCC came out with its 2007 report, using 2006 data, then started to fail. (And, yes, I realize six years isn’t a long time in climate terms. But if a model can’t predict short-term trends, we should be suspicious about how accurate it is in the long term, too).

    If we were to see the original pre-2006 models—before they’d been “backcasted” to fit the temperature data—we might find that they, too, were out of whack, i.e., not accurate. It’s easy to “adjust” a model to fit past temperatures. What Clive’s graph shows is that the models aren’t very good when it comes to forecasting the future. Which is what model predictions are supposed to do, isn’t it? Accurately predict the future?

    I’ll let the IPCC 2007 report have the last word since, for once, the IPCC has got it right: “We are dealing with a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible.” And then they go ahead and predict. No wonder their predictions are so often wrong.

  35. Space says:

    In 2009 scientists from the MET Office tested IPCC models. After finding a zero trend over 10 years with ENSO adjustment, they observed:

    “Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability. The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”

    So at least in some scientist’s minds the models can be falsified. This MET information can also be found in the American Meteorological Society’s 2008 “State Of The Climate” report published in 2009.

    http://www.metoffice.gov.uk/media/pdf/j/j/global_temperatures_09.pdf

  36. Pingback: Global warming roundup | Western Free Press

  37. Pingback: Climatemonitor

  38. Pingback: The Strata-Sphere » Why Global Warming Alarmists Must Cheat, Steal & Lie

  39. Pingback: «I ghiacci artici ai livelli del '79» - Pagina 30

  40. Pingback: Warmist day of reckoning « Jim’s Blog

  41. J. Seifert says:

    IPCC models and forecasts are based on the 1850 (or 1750) to 2000 warming
    period. This time span is much too short to prove forecasts, which are therefore
    wrong from the start.
    Any model, any forecast, must be verifiable in its basics and its climate change
    mechanisms against paleoclimatic times of at least 20,000 years, not less….
    You will be able to find this in the booklet ISBN 978-3-86805-604-4
    on the German Amazon.de. The analysis is irrefutable and transparent
    for everyone.

    The day of reckoning will come within 5 years, because the present temperature
    plateau, observed since 2001, will not rise any higher. This will make the
    public increasingly critical and the Skeptics will get more say in the media…
    JS., book author

  42. Brian H says:

    As soon as challenged, the Hokey Team & Assoc. are quick to take refuge in the “projections” claim. Then they (and their political masters and puppets) go right back to treating them as predictions. A cute trick! Authority without accountability; nice work if you can get it.

    From Brignell’s “Number Watch” site, in The Laws section:

    The law of computer models

    The results from computer models tend towards the desires and expectations of the modellers.

    Corollary

    The larger the model, the closer the convergence.

  43. Pingback: Militant Libertarian » Day of reckoning draws nearer for IPCC

  44. I put a note up about this post on my own desultory site at http://phenell.wordpress.com/2012/03/04/global-warming-can-we-go-home-yet/#comments, and I notice that someone with the purported name of erschrodinger (but not, one has to conclude, the real Erwin Schrödinger whose wave equations I grappled with whilst I was at university, what with the real Schroedinger having died 50 years ago) reckons that your observed data is “totally fictious” (sic).
    Is there any point in referring him to the basis for your figures?

  45. Space says:

    Max Planck Institute Director Confirms: “Climate Models Inconsistent With Observations”

    http://thegwpf.org/science-news/5493-max-planck-institute-director-confirms-qclimate-models-inconsistent-with-observationsq.html

  46. Barry Dwyer says:

    How do I unsubscribe from this comment thread? The link below leads to the message
    “You may not access this page without a valid key”.

    Please Help
