There has been a lot of discussion (see here) about the treatment of errors by Marcott et al. through their Monte Carlo analysis. Since I avoided any interpolation or Monte Carlo by using only the measured values for each proxy, the error analysis for my reconstruction is straightforward. I have calculated the standard deviations, the error on the normalisation, and the anomaly error for each proxy between 5000 and 4500 years BP. The results are given here.
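The binning and per-bin statistics described above can be sketched as follows. This is a minimal illustration, not the actual analysis code: the proxy ages, anomaly values, and random-number seed are all invented, and only the 100-year bin width is taken from the text.

```python
import numpy as np

# Hypothetical illustration: bin one proxy's measurements into 100-year bins
# spanning the Holocene and compute the per-bin mean and standard deviation.
# The data below are randomly generated stand-ins for real proxy values.
rng = np.random.default_rng(0)
ages = rng.uniform(0, 11300, 200)          # measurement ages in years BP
temps = rng.normal(0.0, 0.6, 200)          # proxy temperature anomalies (deg C)

edges = np.arange(0, 11400, 100)           # 100-year bin edges, 0 to 11300 BP
idx = np.digitize(ages, edges) - 1         # bin index for each measurement

bin_mean = np.full(len(edges) - 1, np.nan)
bin_std = np.full(len(edges) - 1, np.nan)
for i in range(len(edges) - 1):
    sel = idx == i
    if sel.any():
        bin_mean[i] = temps[sel].mean()
        # sample standard deviation needs at least two values in the bin
        bin_std[i] = temps[sel].std(ddof=1) if sel.sum() > 1 else np.nan
```

Bins with no measurements are left as NaN rather than interpolated, consistent with using only the measured values.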

The typical statistical error for a single anomaly measurement in a 100 year bin is 0.6 °C. The statistical error on the overall global mean can be derived from sigma(mean) = sigma/sqrt(n), where n is the number of proxies contributing to the bin. The numbers contributing to each bin are shown here.
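The error propagation above is just the standard error of the mean. A short sketch, using the 0.6 °C single-measurement error from the text but invented per-bin proxy counts:

```python
import numpy as np

# Error on the global mean per bin: sigma_mean = sigma / sqrt(n),
# where n is the number of proxies contributing to that bin.
sigma = 0.6                                # typical single-anomaly error (deg C)
n_proxies = np.array([12, 35, 60, 73])     # hypothetical counts per bin
sigma_mean = sigma / np.sqrt(n_proxies)
two_sigma = 2.0 * sigma_mean               # half-width of the shaded band in Fig 1
```

With all 73 proxies contributing, the 2 sigma band is roughly ±0.14 °C; bins with fewer proxies have correspondingly wider errors.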

Figure 1 shows the data with a 90% confidence level shading based on a 2 sigma error. The smooth curve is an FFT smoothing filter. Figure 2 shows the same result overlaid with the graph published in Science.

Fig 1: Globally averaged temperature anomalies of the 73 redated proxies with 100-year binning. Shown in yellow is the 2 sigma (90%) confidence level.

Fig 2: Comparison with the published Marcott et al. result based on Monte Carlo derived errors.
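The FFT smoothing used for the curve in Figure 1 can be sketched as a simple low-pass filter in Fourier space. The post does not state the filter parameters, so the 1000-year cutoff period below is an assumption; only the 100-year spacing of the binned series comes from the text.

```python
import numpy as np

def fft_smooth(y, dt=100.0, cutoff_period=1000.0):
    """Low-pass smooth an evenly spaced series by zeroing Fourier
    components with periods shorter than cutoff_period (in years).
    dt is the sample spacing in years; cutoff_period is a guess."""
    f = np.fft.rfftfreq(len(y), d=dt)      # frequencies in cycles per year
    spec = np.fft.rfft(y)
    spec[f > 1.0 / cutoff_period] = 0.0    # kill the fast variations
    return np.fft.irfft(spec, n=len(y))
```

A hard spectral cutoff like this can ring near sharp features; a tapered window would be gentler, but the hard cutoff is the simplest illustration of the idea.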

**Conclusions**

The statistical errors are about 50% larger than those derived by Marcott’s Monte Carlo analysis. However, the two approaches agree well on the long term trend. The previous post demonstrated that the data are insensitive to any rapid temperature variation lasting less than ~400 years. For this reason alone, there is no evidence in the data for a 20th century uptick. Splicing on the instrument data would also require a perfect renormalisation of the anomalies from the 5500-4500 YBP baseline to 1961-1990.
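The renormalisation step amounts to shifting the proxy anomalies, defined relative to their mid-Holocene baseline, onto the instrumental 1961-1990 baseline. A hedged sketch, with all inputs invented for illustration:

```python
import numpy as np

def rebaseline(proxy_anom, proxy_overlap_mean, instr_overlap_mean):
    """Shift a proxy anomaly series so that its mean over an overlap
    interval matches the instrumental mean over the same interval.
    Both overlap means are assumed to be computed beforehand."""
    offset = instr_overlap_mean - proxy_overlap_mean
    return proxy_anom + offset

# Illustrative use: proxy series averaging 0.0 over the overlap, while
# the instrumental record averages 0.3 deg C over the same interval.
shifted = rebaseline(np.array([0.1, 0.2]), 0.0, 0.3)
```

The difficulty the text points to is that the offset itself carries the statistical error of both overlap means, so the splice can never be "perfect" in practice.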
