New: simulating a 20% (of resolution) uncertainty in proxy measurement times makes the peaks practically disappear.
Have there been previous warming periods over the last 10,000 years comparable to current warming? If so, would the Marcott proxy data have been able to detect them? Tamino has a post where he attempts to prove that such peaks would indeed have been detected, and that there is therefore nothing comparable to the recent warming spike. To show this he artificially generated three 200-year long triangular peaks of amplitude 0.9C at dates 7000BC, 3000BC and 1000BC. These were added to the underlying data and processed as before. The resultant signals would have easily been detected (he claims).
Let's do exactly the same thing on the individual raw proxy data and then process them in the same way as I described previously. I simply increased all proxy temperatures within 100 years of a peak by DT = 0.009*DY, where DY = (100 - ABS(peak year - proxy year)). Each spike is then a rise of 0.9 deg.C over a span of 100 years, followed by a return to zero over the next 100 years. The same three dates were used for the peaks as those used by Tamino. The results are shown below for anomaly data binned in 50-year time intervals.
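As an illustration only, here is a minimal Perl sketch of the spike increment just described (not the actual script used for this post):

use strict; use warnings;

# Peak years written as negative years AD (7000BC, 3000BC, 1000BC) - a
# convention assumed here purely for illustration.
my @peak_years = (-7000, -3000, -1000);

# DT = 0.009*DY with DY = 100 - |peak year - proxy year|, applied only
# within 100 years of a peak.
sub spike_increment {
    my ($proxy_year) = @_;
    my $dt = 0;
    foreach my $peak (@peak_years) {
        my $dy = 100 - abs($peak - $proxy_year);
        $dt += 0.009 * $dy if $dy > 0;
    }
    return $dt;
}

# Example: a proxy value recorded 40 years from the 3000BC peak gains 0.54C
printf "increment at 3040BC = %.2f C\n", spike_increment(-3040);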

Fig1: Comparison of proxy anomalies (relative to 3550BC to 2550BC) with and without added peaks (shown in red). The curves are smoothed 5-point averages. The blue arrows show potential real peaks in the data, including the Medieval Warm Period.
The peaks are indeed visible although smaller than those claimed by Tamino. In addition, I believe we have been too generous by displacing the proxy data linearly upwards since the measurement standard deviation should be properly folded in. I estimate this would reduce the peaks by ~30%.
What I find very interesting is that there actually do appear to be smaller but similar peaks in the real data (blue arrows), one of which corresponds to the Medieval Warm Period!
New: Simulation of a 20% time resolution error in proxy measurement time.
Several people have commented that proxies measure an average temperature over an extended period rather than instantaneously, and that there is also uncertainty in the exact timing of the measurements. I have looked into this effect by simply randomizing the proxy measurement times within a window of 20% of the proxy resolution. So a proxy with a resolution of 100 years is moved randomly to a new time within +- 20 years of the recorded time. The data are otherwise treated exactly as before, but the new time is now used to calculate the peak increment. This simulates imperfect time synchronization across proxies. The result is shown below.

Fig2: Result after adding peaks based on a time randomized within 0.2*resolution. Effectively only the peak at 3000BC is still discernible.
Uncertainty in the time synchronisation across proxies effectively smears out the peaks. With a random time shift of 20% of each proxy's resolution, only the pulse at 3000BC remains a candidate. Note this does not simulate long proxy measurement (averaging) times.
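For clarity, here is a minimal Perl sketch of the randomization just described (an illustration, not the actual script): for a proxy of resolution res, the recorded time is shifted by a uniform random offset within +-0.2*res, and the shifted time is then used when computing the spike increment.

use strict; use warnings;

# Shift a recorded measurement year by a uniform random offset within
# +-20% of the proxy resolution (in years).
sub jittered_year {
    my ($recorded_year, $res) = @_;
    my $offset = (2 * rand() - 1) * 0.2 * $res;   # uniform in [-0.2*res, +0.2*res]
    return $recorded_year + $offset;
}

# Example: a proxy with 100-year resolution recorded at 3000BC (year -3000)
# ends up somewhere between 3020BC and 2980BC.
printf "shifted year = %.0f\n", jittered_year(-3000, 100);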
The code used in this post can be downloaded here. There are 3 PERL scripts.
– convert.pl reads the proxy data from the excel spreadsheet and creates “station files” like Hadcrut4.
– marcott_gridder.pl bins the data both in time(50y) and space (5×5 deg)
– marcott_global_average_ts_ascii.pl reads the grid and calculates the global averages – as plotted above.
Place the Marcott excel spreadsheet (provided with the Science Paper’s supplementary material) in the working directory. Convert it to .xls, remove the two spurious zero temperature entries in Proxy 62, and ensure the metadata sheet cells do not span 2 rows.
Type:
>mkdir station-peaks
>cd station-peaks
>perl ../convert.pl
>cd ..
>perl marcott_gridder.pl | perl marcott_global_average_ts_ascii.pl > peaks-anomalies
The last 2 scripts are modified versions of the Met Office analysis software for CRUTEM4 – British Crown Copyright (c) 2009, the Met Office.
Dr. Best, thank you – may I ask whether you've considered publishing online the code used for the analysis? It would be a useful move.
Yes I will publish the code shortly. There are 3 PERL scripts.
– convert.pl reads the proxy data from the excel spreadsheet and creates “station files” like Hadcrut4
– marcott_gridder.pl bins the data both in time(50y) and space (5×5 deg)
– marcott_global_average_ts_ascii.pl reads the grid and calculates the global averages – as plotted above.
Thank you!
Yes, yours is the way the output looks when done correctly. I gave Tamino a couple of brief replies pointing out that his peaks hadn’t broadened at all and that therefore he had done it wrong (obvious to anyone who understands data), but he did not print either of my replies.
It’s frankly amazing to see his acolytes wowed by his results, when it’s obvious from first principles that he flubbed the analysis. By the way, Clive, you posted a link on that page which was meant to point here but took me to a different page.
I also had several earlier comments to Tamino’s post ignored ! The trick is to halfway agree with him !
Yes the link was wrong – I tried to correct it in a reply.
Tamino has blocked me from commenting again! Strange how he calls his blog "Open Mind". I was trying to make the point that adding an artificial peak at a single time across all proxies assumes that they are all perfectly synchronised in time. This is a different problem to their individual time resolutions of ~100-300 years. Therefore, allowing for a half-interval time shift, the signal is more likely to be observed as shown by the running 250-year average above.
Yes, “Open Mind”, “SkepticalScience” etc. What a tangled web they weave. Pleased to see you on the scene, Clive.
Thank you for this post. I think the Marcott study has the potential to affirm CAGW. If however the study is flawed, it should discredit this hypothesis entirely. Instead of confirming Mann’s work, it would instead confirm the lack of integrity rampant within the climate science community.
I’m frankly blown over by how clever Marcott is. With a doctored graph and a headline he could have as big an impact as 1998 or the release of “An Inconvenient Truth”.
I was a great AGW adherent from ’89 through ’03 but have been a determined skeptic for the last decade. If the Marcott study holds up, this seemingly runaway train (climate hysteria) is gone. The Marcott study looks to undermine what I believe is the skeptics most significant talking point – that our modern temperature spike is not unprecedented and not unlike many documented spikes and valleys observed in temperature records of the last 10,000 years.
It’s a little scary for me as I have been quite vocal in my skepticism and I would hate to be proved wrong. It must be terrifying for the climate scientists, politicians, green warriors, universities and NGO’s.
“The Marcott study looks to undermine what I believe is the skeptics most significant talking point – that our modern temperature spike is not unprecedented and not unlike many documented spikes and valleys observed in temperature records of the last 10,000 years.”
That is precisely why it was written. The paper and its bruited release were too contrived, too overblown and too clever by half. It may not sink like Gergis, but it's been severely wounded, as has the whole science-by-press-release process.
I forgot to thank Dr. Best for a credible analysis on this subject. While I’m not capable of judging it fully, I find it laughable that people like Tamino can claim certainty on these matters. If Dr. Best has credibly and persuasively shot a hole in Tamino’s hot air balloon, I congratulate him.
Dr. Best,
Wouldn’t there need to be a second hurdle to jump in order to see your ‘bumps’ through to reality? Namely, the physical mechanism…
We do have a LIA and a MWP and a RMP … Short of naming these other peaks after other stuff (Solomon's Warming Period, anyone?), wouldn't we need to come up with something that can also explain such an aberration? Cold spikes have an easier road to a physical basis (volcanic eruption), but what about a warm spike (and return) in such a short amount of time? Wouldn't it also be a burden on the part of the postulator to seek/provide such possible bases?
At the risk of being labeled a ‘concern troll’ (which apparently comes with the territory of ‘half-agreeing’ with people) … I’m concerned that this sort of expose, when not combined with something further, introduces you to a charge of “peddling doubt”.
There is evidence of a regular 60-year climate oscillation; see: https://clivebest.com/blog/?p=2295 I suspect that is why the earth warmed quickly between 1970-1995 and why it is now at a standstill. Warming will only continue after 2030. There is a CO2 warming effect – but it isn't catastrophic! We just have to stabilize emissions at a fixed level and the climate will stabilize. The problem right now is the continuing acceleration of emissions. We have another 60 years or so to stabilize them globally.
Stabilization of emissions will lead to a steady increase in concentration, because CO2 has a long (net) lifetime. Emissions will have to equal removal rates (which in turn depend on the concentration) for concentration to stabilize.
See e.g. http://climateinteractive.wordpress.com/2013/03/19/two-myths-lets-level-off-emissions-reverse-climate-change/
Bart
Not so sure I agree about this. Let's imagine a stabilization of CO2 emissions at current values of 5.5 Gtons/year.
Assume an atmospheric lifetime Tau for CO2. A simple model is to assume that once a year a pulse of N0 = 5.5 Gtons of CO2 is added to the atmosphere due to fossil fuel emissions. This then decays away with lifetime Tau while further yearly emissions are added. The accumulation of fossil CO2 in the atmosphere after year n is then simply given by:
CO2(n) = N0*(1 + sum(i=1,n-1) exp(-i/Tau))
If we assume that n is very large then we can treat this sum as an infinite series and the atmosphere will eventually saturate at a certain value of anthropogenic CO2 concentration.
Multiplying both sides by exp(1/Tau) and subtracting the original equation (the standard geometric series trick), the sum in the limit as n -> infinity gives
CO2(n -> infinity) = N0/(1 - exp(-1/Tau))
Taking some possible values for Tau (see estimates above) we can calculate:
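As a minimal sketch of this limit (the lifetimes below are illustrative values only, not necessarily the estimates referred to above):

use strict; use warnings;

# Constant annual pulse N0 (Gtons/year), each pulse decaying with lifetime
# Tau (years): the geometric series saturates at N0/(1 - exp(-1/Tau)).
my $N0 = 5.5;
foreach my $tau (50, 100, 200) {    # illustrative lifetimes only
    my $limit = $N0 / (1 - exp(-1 / $tau));
    printf "Tau = %3d yr : saturation level = %6.0f Gtons of fossil CO2\n", $tau, $limit;
}

To a good approximation the limit is N0*(Tau + 1/2), so the saturation level scales directly with the assumed lifetime.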
"We just have to stabilize emissions at a fixed level and the climate will stabilize."
That would be the case, if we were driving atmospheric CO2. But, we aren’t. Atmospheric CO2 is overwhelmingly a function of temperature, not of emissions. The relationship is approximately
dCO2/dt = k*(T – To)
dCO2/dt = rate of change of CO2
k = coupling factor in ppm/degC/unit-of-time
T = global temperature anomaly
To = equilibrium level of temperature
With this equation, and the appropriate affine parameters, you can reconstruct the entire record of atmospheric CO2 concentration since the advent of precise measurements 55 years ago, as I show here using the GISS series, to high fidelity. The relationship leaves no room for significant human forcing, and human inputs are evidently rapidly sequestered.
This type of relationship is to be expected in a continuous transport system where CO2 is constantly entering and exiting the surface system, and the differential rate at which it does so is modulated by surface temperature.
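As an aside, the relation is easy to integrate numerically; a minimal Perl sketch with purely made-up parameters and a toy temperature series (not fitted values) shows how a CO2 record would follow from dCO2/dt = k*(T - To):

use strict; use warnings;

my $k   = 2.0;     # ppm per degC per year (assumed for illustration)
my $T0  = -0.1;    # equilibrium temperature anomaly To, degC (assumed)
my $co2 = 315.0;   # starting concentration, ppm (assumed)

foreach my $T (0.00, 0.05, 0.10, 0.15, 0.20, 0.30) {   # toy anomaly series, degC
    $co2 += $k * ($T - $T0);                            # one-year Euler step
    printf "T = %5.2f C -> CO2 = %6.1f ppm\n", $T, $co2;
}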
That's a nice relationship. However there are two ways to interpret it: either the rate of change of CO2 is proportional to the temperature change, or the inverse. It all depends on whether you think CO2 is the cause of warming or the effect of warming. I think changes in CO2 were the effect of temperature changes during the last million years. However, the anthropogenic increase in CO2 is still the most likely candidate for the moderate warming seen in the last 100 years or so.
No, that just doesn't work. If the rate of change of CO2 were driving temperature, then we could drive it up to 1,000,000 ppm and, as soon as we stopped, the temperature would drop back to its equilibrium.
This is a standard causal integral relationship and, as I said, this type of relationship is to be expected in a continuous transport system where CO2 is constantly entering and exiting the surface system, and the differential rate at which it does so is modulated by surface temperature.
That may have come out a little garbled so, let me try to be clearer. If the rate of change of CO2 is driving temperature, then as soon as it stops changing, temperature settles to its equilibrium, regardless of the absolute concentration of CO2 which is left behind.
This is an absurd result, hence we must conclude the converse, that temperature is driving the rate of change of CO2.
“We do have a LIA and a MWP and a RMP”
Are you saying these do not have a physical explanation, therefore they did not exist?
Are you saying these do not have a physical explanation, but you accept that they existed, but similar events did not happen in the previous 11300 years, because, so there.
I am saying that they did exist because there is independent evidence to support them. I am just saying that the proxy data have insufficient resolution to detect them properly. They will instead be smeared out to just a slight bump unless they last longer than ~300 years.
Even Marcott’s paper states.
Dr. Best,
Thank you. I understand that this is just an exercise to see how a clear signature in the proxies would pass through the Marcott analysis — so, from that perspective, it’s just a test of the analysis itself.
But if, instead, the point is to actually answer this question: "Have there been previous warming periods over the last 10,000 years comparable to current warming? If so, would Marcott proxy data have been able to detect them?", then this analysis doesn't seem to fully suffice.
This analysis assumes that the proxies capture the actual temperature signal 1:1, linearly. That is, a spike in temperature is reflected as the same spike (whether shifted in time or not) by the proxy, and the dynamic ranges of the proxies, as a group, are the same as those of the actual temperatures, with no attenuation of the signal.
Looking at the proxies in Marcott, I find it hard to believe that this assumption holds. And, if it doesn’t, then this analysis only tells you what spikes in the proxies might do. We’d still need to answer the question as to whether the proxies capture the full dynamic range of changes in temperature.
This is not my field of expertise, so perhaps I’m quite mistaken, but finding highly accurate temperature proxies seems quite a challenge.
Yes you are correct – it is an artificial spike. It assumes a perfect time synchronization across all the proxies. It also assumes that there is no regional variation so that the entire Earth warms everywhere by 0.9 degrees over 100 years and then cools back again over the next 100 years. So it is a fairly extreme form of global warming. Even then the “observed” peak is much reduced and washed out.
In addition we should really introduce a time jitter into the analysis, with a sampling error of around 30% of the time interval for each proxy. That is why I claim that all that would actually be left are the small red bumps. Lo and behold, there are several small real bumps, including the MWP.
I graciously created a comment for Tamino pointing out two apparent flaws in his analysis, and suggesting workarounds. That too has not seen the light of day.
To a person familiar with how data works it is visually obvious that his methodology is in error. One vs 100 vs 1000 perturbations in his method produce almost identical spike results, but with ever-smoother background data.
I believe your methodology accomplishes the two corrections I sense are necessary (and more):
1) Real world data would never introduce exactly identical spikes in X (time) and Y (amplitude) in all 73 proxies. Doing so introduces what is essentially a digital defect in the data… the same as an artificial dropout, or a scratch on a CD. When it is so perfectly aligned in the raw data, the defect will naturally survive a wide variety of processing algorithms. Yet this is what Tamino did. In the real world, different proxies will reflect climate in different ways, with at least slightly different temperature responses (assuming they are all temp proxies!) Even real thermometers don’t all produce the exact same signal.
If I’m reading your methodology correctly, it addresses this issue.
2) Tamino’s methods are not fully explained with respect to perturbation and time-adjustment (end cap)… Steve M showed that the decisions underlying those methods and parameters are crucial to whether a spike even shows at all in modern times.
You’ve shared 100% of your methods… which all seem quite reasonable… and interestingly you get a very different result.
My final thought: IIRC, the strange thing about the “spike” in Marcott is that it is not seen in the unprocessed raw data. By whatever means, the spike is a feature of the processing methods and parameters. Yet these simulations of paleo “spikes” involve introducing raw-data spikes and determining whether the processing will eliminate the spikes. Seems like inverted tests to me!
I’m sensing two questions are important:
a) If there were temp ‘spikes’ in the past, would they be visible
b) How likely is it that processing methods and parameters will produce a spike that’s not in the original data?
Thanks for inspiring further thought!
(I’ll add a variant of this to the comments on RomanM’s new post as well)
In fact I did everything to exaggerate the effect of the spikes, because I also assume that all the proxies are perfectly synchronized in time and the spike is picked up by each one. The warming is global with no regional variation. However, I use only measured values with no smoothing – unlike Tamino. The actual data show that the spikes would be washed out.
To allow for time jitter between proxies I really should include a random sampling error for each proxy equal to say 30% of the sampling period. Then the signal would be much more washed out – so it becomes much more like the red smoothed bumps. If I find time I will repeat this exercise and randomise the sampling.
Clive, put a link to your analysis on Real Climate, since Tamino is blocking you. They may allow it through.
Did you add the peak data at the granularity of the “20 year interpolation” discussed in your other post? Or did you add it at the original granularity of each proxy and then do your “20 year interpolation”?
I don’t mean to be rude – but do you really believe that a proxy “captures” the value of a single year out of let’s say 200 – basically a “downsampling filter”? Newsflash – it does NOT work that way – any decent proxy that I know of captures some form of AVERAGE over the entire segment – it is more like a “moving average” filter. Try again using this approach and you will see a HUGE difference
I completely agree with you! For that reason the analysis should really randomise the time sampling to say 30% of the time period. Then I am sure that if the peak survives it will become a broad bump similar to the smoothed red bump.
After making the proxies act as they do in reality, the peaks WILL survive and will have a slightly bigger amplitude – similar to that seen in Tamino's graphs.
That being said, even on your graph the "bump" from the Medieval Climate Anomaly does not look at all similar to what is left of the "magic unicorn spikes" – it looks at most 0.1 degrees above the baseline trend from about 600 to 1100 AD – which we know anyway in excellent detail from previous papers like Mann or Moberg.
“like Mann”.
But Mann, as McShane and Wyner pointed out, only used the first and last proxy blocks, ostensibly to test extrapolation skill of his methodology; he failed. He failed because there is nothing real about only using the end blocks, at least to establish real trends, just like a running mean can give false results, especially when a peak in the data is not properly specified.
How quaint to see Mann still being praised.
Really? Still pushing the MCA? The good Mann would be proud.
Dr. Best,
Thank you. I agree with numerous other people commenting here and in other blogs that both the proxy creation/collection/processing process and the proxy data collation process heavily filter out potential prehistorical temperature spikes (either up or down).
One nuanced aspect of the potential prehistorical temperature spikes is that regardless of the causal mechanism, they demonstrate that the climate system is generally self correcting, keeping global temperatures within certain bounds. This is another reason why people who would have us alarmed about cagw/cc/ew prefer to see any potential prehistorical temperature spikes disappeared.
Thanks again and thank you to whoever it was who provided the link.
Following the comment from atomic-ghost I decided to introduce an uncertainty in proxy time based on the proxy resolution interval. I simply took 20% of the resolution between proxy measurements (for example 20 years for a resolution of 100 years). Then I randomized the times for each proxy within +- 0.2*resolution (T +- 20 years for the example). I used this shifted time to calculate the pulse value and added it to the measurement. Essentially the 3 pulses now all but disappear.
Fail – not only you completely misunderstood initially how a proxy captures the real-world values but instead of fixing that you now went the complete opposite way and you added extra artificial noise that has no resemblance to how any of the real proxies look in reality. Complete failure 🙁
atomic-ghost: Clive Best’s first troll.
You can see how proxies capture the real-world values here.
Individually they are very noisy, and they vary with location.
Sorry for the brief answer yesterday – I was in a hurry and that will extend to this weekend, so I will have to be again rather brief.
There are two different problems here – how proxies would react to “magic unicorn spikes” and how the analysis will handle that.
On the first problem it is very clear that most proxies (especially alkenones) DO NOT act as a “down-sampling filter” – but instead as a “moving-average filter” – the difference is subtle but in this specific case the resulting “magic unicorn spikes” maximum size captured by a single proxy is quite different, and as a result the height of the spike in the reconstruction will be different.
Regarding the “timing noise” – on this matter the entire Monte Carlo work that Marcott is doing with 1000 perturbed results is exactly about that “noise” and about how to separate long GLOBAL climate trends from short local variations and timing errors in the proxies – and on that approach expert statisticians like Robert Rohde agree it is a valuable one. The output from that specific approach – which would be what Marcott should show if we had “magic unicorn spikes” in the Holocene – looks like this:
Right, so every “magic unicorn spike” in the past is either a “short local variation” or “timing error in the proxy”, whereas the 20thC spike is AGW red and raw of tooth and claw.
Ha, ha.
Marcott says his uptick is not robust; as countless other proxy studies show, it doesn't exist; it diverges; how unrobust is that!
As for the past Marcott says this:
“Because the relatively low resolution and time-uncertainty of our data sets should generally suppress higher-frequency temperature variability, an important question is whether the Holocene stack adequately represents centennial- or millennial-scale variability”
The short answer is yes, the low resolution technique used by Marcott does suppress past ‘spikes’; as for Tamino confirming Marcott in showing there were no past spikes, what Tamino has shown is that there were no past spikes of an order bigger than the 20thC spike. So, over the last 10000 years we can rule out spikes of between 10 and 100C.
Well done Tamino! There were no Dansgaard–Oeschger or Bond events in the last 10000 years.
As for trend Marcott has aliased short and long term trends; that doesn’t matter because he still confirms the drastic decline in temperature over the last 10000 years but, as noted, he has removed the interesting and valid ‘upticks’ which not only rival the 20thC ‘uptick’ but surpass it.
As I understand the Marcott graphic, the 'false' uptick was emphasized by overlaying the Mann 'hockey stick' data that has previously been criticized for its use of bristlecone pines. Tamino and others support this use by claiming that there is no evidence of temperature spikes in the last 10,000 years that compare with the recent Mann spike.
In this respect it is interesting to note the papers of Libby (a real scientist) and Pandolfi from the 1970s, in which they calculated a fall in temperature over the past 1800 years (to the 1970s) of 1.5 °C. This supports the view from history that the Roman warm period was much warmer than present, as shown by the areas in which vines and other crops grew, and of course disproves Mann and others.
Among other proxies, Libby and Pandolfi used bristlecone pines which they cross calibrated with ice cores and Japanese cedar, but used the more precise isotope dating method. It would, therefore, be an interesting exercise to recalculate the Mann hockey curve using their data and/or calibration – I assume (with no evidence) that some of their bristlecone proxies must have been used by Mann.
Dendrology Shows Cooling Past 1800 Years
http://www.nature.com/nature/journal/v261/n5558/abs/261284a0.html
Isotopic tree thermometers
Leona Marshall Libby*, Louis J. Pandolfi†, Patrick H. Payton†, John Marshall III‡, Bernd Becker§ & V. Giertz-Sienbenlist
*School of Engineering, Department of Energy and Kinetics
†Department of Chemistry, Institute of Geophysics and Planetary Physics and School of Engineering
‡Department of Meteorology, University of California, Los Angeles, California 90024
§Landwirtschaftliche Hochschule, Universitat Hohenheim, 7000 Stuttgart 70, FRG
Tree Ring Laboratory, University of Munich, Munich, FRG
“Evidence is summarised here that trees store a record of atmospheric temperature in their rings. In each ring, the ratios of the stable isotopes of hydrogen and oxygen vary with the air temperature prevailing when the ring was formed. We have shown that the temperature records in three modern trees seem to follow the local mercury thermometer records, and have found that a Japanese cedar indicates a temperature fall of ~1.5°C in the past 1,800 yr.”
Isotopic tree thermometers: Correlation with radiocarbon
1. Leona Marshall Libby
2. Louis J. Pandolfi
http://onlinelibrary.wiley.com/doi/10.1029/JC081i036p06377/abstract
"We have obtained evidence that trees store the record of climate in their rings. In each ring the ratios of the stable isotopes of hydrogen and oxygen vary in proportion to the air temperature when the ring was formed because the isotopic composition of rain and atmospheric CO2 varies with temperature. In this paper the stable isotope variations of hydrogen and oxygen in a Japanese cedar have been correlated with the secular variations of radiocarbon measured in bristlecone pines by Suess (1970). We find significant negative correlations for both isotope ratios over the last 1800 years. The inference is that the small-scale (~1%) variations in 14C concentrations in tree rings are related to climate variations. In our data we find periodicities of 58, 68, 90, 96, 154, 174, 204, and 272 years. Because our samples are averaged over 5 years each, we are not able to detect the 21-year sunspot cycle in the present data. The Suess samples averaged over about 25 years each reveal a periodicity of 183 years, in agreement with our periodicity of 174 years."
Climate periods in tree, ice and tides
Leona Marshall Libby* & Louis J. Pandolfi†
http://www.nature.com/nature/journal/v266/n5601/abs/266415a0.html
*Environmental Science and Engineering, University of California, Los Angeles, California 90024
†Department of Chemistry and Institute of Geophysics and Planetary Physics, University of California, Los Angeles, California 90024
Ten climate periods found in stable isotope ratios of oxygen and hydrogen, measured in 1,800 yr of Japanese cedar rings, agree with climate periods found in 800 yr of Greenland ice and with periods computed from the tidal stresses of the Sun–Moon–Earth system, and with periods found in the 14C record of the bristlecone pine sequence of southern California. The Greenland oxygen ratios have previously been found to have opposite phase to the 14C ratios of the bristlecones, and we have found also an opposite phase between oxygen and hydrogen ratios in the Japanese cedar on the one hand and 14C in bristlecones on the other hand
Yes the original narrative was that the Hockey Stick has been vindicated by Marcott’s uptick and that today’s temperature increases are unprecedented since the last Ice Age.
The uptick has now disappeared. This means that the narrative now relies on instrument data, but can we be sure the two actually line up properly? This depends on re-normalizing average temperatures from 5000 years ago to the 1961-1990 baseline. I simply added 0.3 C, but I have seen no convincing evidence to support this.
For today's temperature rise to be unprecedented, you have to show that something similar didn't happen in the past. Hence the need for evidence that Marcott would have detected it had it happened.
One of the authors Jeremy Shakun has written on RealClimate ….
If you combine that uncertainty with the analysis on time jitter of proxies above then I doubt whether any climate excursions lasting less than about 500 years would be detected in Marcott’s data.
Clive-
Thank you for your work, and your update with the temporal uncertainty added in to the mix. Have you looked at the supplemental material from the Marcott et al. paper? It is available for free. Pages 23-26 are quite revealing-
“9. Signal retention-
Numerous factors work to smooth away variability in the temperature stack. These include temporal resolution, age model uncertainty, and proxy temperature uncertainty. We conducted a synthetic data experiment to provide a simple, first order quantification of the reduction in signal amplitude due to these factors.”
“The gain function is near 1 above ~2000 year periods, suggesting that multi-millennial variability in the Holocene stack may be almost fully recorded. Below ~300 year periods, in contrast, the gain is near zero, implying proxy record uncertainties completely remove centennial variability in the stack. Between these two periods, the gain function exhibits a steady ramp and crosses 0.5 at a period of ~1000 years.”
Figures S17 and S18 are unambiguous regarding the frequency response of the reconstruction. By the way, when they say the gain is ‘near zero’, they mean it is one percent or less of the actual signal amplitude.
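As a rough illustration of why such averaging suppresses short periods (this is only the gain of a simple boxcar smoother, not Marcott's published gain function, which also folds in age-model and proxy-temperature uncertainties), a short Perl sketch:

use strict; use warnings;

my $pi = 3.14159265358979;

# Gain of a simple moving average of width $width years applied to a
# sinusoid of period $period years: |sin(pi*w/P)/(pi*w/P)|.
sub boxcar_gain {
    my ($width, $period) = @_;
    my $x = $pi * $width / $period;
    return 1 if $x == 0;
    return abs(sin($x) / $x);
}

foreach my $period (100, 300, 500, 1000, 2000, 5000) {
    printf "period %5d yr : gain %.2f\n", $period, boxcar_gain(300, $period);
}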
So, apparently Tamino argues that Marcott’s spectral analysis (with conclusions similar to yours) of Marcott’s reconstruction is wrong, in order to defend Marcott’s initial claim that modern temperature trends are unprecedented, even though Marcott later backed off that claim. Who’s on first again…?
Meanwhile, it remains durably certain that Mann is a horse’s ass for his claim that the paper shows that modern rates of change in temperature are unprecedented in 11,400 years.
In conclusion, I desperately hope that Marcott is selected as the keystone graphic (SPM covers, stage backdrops, tshirts, coffee mugs, NYT front page) for the IPCC AR5 report.
🙂
Yes I read the supplementary material – but it needs re-reading several times and even then you are not sure you have a clear idea what detailed analysis was done.
Tamino is a smart guy, but his result just looks wrong to me. The proxy data cannot possibly be that sensitive if you go back to the raw proxy measurements. So a basic problem I have with the methodology used in climate science is that it often resorts to complex modeling, Monte Carlo simulations, interpolations and the like, which tend to camouflage the sparse underlying data.
Clive, just an FYI in case you don’t know, the tolerant fascists over at RC move comments they don’t like into a section they call “The Bore Hole”…http://www.realclimate.org/index.php/archives/category/the-bore-hole/….your comments were moved there because there is only one truth to the statist all others must be crushed, ignored or dismissed! 🙂
The first comment was published on the site for something like 14 hours and then mysteriously disappeared. Incredible !
Oh well never mind. I think this demonstrated arrogance rather than science !
Clive: Why was a "20% uncertainty in proxy resolution time" used, and precisely what does this mean? If 14C dating of samples and interpolation between dated samples afforded a date of 5000 years before present, are you assuming a normal distribution of possible dates around 5000 ybp, with a 95% confidence interval for the actual date of the sample of 4000-6000 ybp? This interpretation seems unlikely to be correct.
Do you know what 95% confidence intervals are usually put around 14C dates?
Frank,
20% uncertainty in proxy time resolution means the following:
– I assume a random uncertainty in the measurement time for each proxy. This means that for a proxy temperature with a resolution of 200 years (column H of the metadata spreadsheet) recorded as being made at 4000 ybp, I assume an error of 4000 +- 40 years BP. I use this 40-year uncertainty to randomly sample the artificial peaks for that particular proxy.
– I plot the results on the measurement times but with the peaks sampled randomly as above.
Why 20% uncertainty?
– Neither the paper nor the spreadsheets give individual errors for proxy times. They only give the resolution. They talk about uncertainties of the order of the resolution times, so I simply decided to be conservative and use 20% of the resolution. I strongly suspect that the error is greater than this.
– I have not found any estimates of what the 95% confidence intervals are for 14C dating.
The objective was to show how a conservative estimate of the proxy measurement jitter inherent in each of the 73 proxies will wash out a signal fixed in absolute time.
Thanks. In your explanation does +/-40 years represent one or two standard deviations of a normal distribution around 4000 years BP?
It would be conservative to include the 14C dating error. A little reading suggests an error of several decades. Calib 6.0.1 certainly produces a confidence interval, but Marcott may not have included that information.
http://calib.qub.ac.uk/calib/calib.html
I always mean one standard deviation. I don’t understand why climate science has introduced 90% confidence levels = two standard deviations – perhaps it is to protect model predictions made in the past. In this case however 40 years was chosen as an arbitrary conservative value just to see the effect of time jitter. The 14C dating error then would need to be included on top – probably as root mean square. Extra errors further wash out the peaks.
What needs to be kept in mind is that adding a spike to the proxy data is not the same as adding a spike to the proxies. This is where people get confused.
The proxies are ocean cores or similar sitting in some repository. To truly add a spike to the proxies you would need to travel back in time and change the temperature of the earth. This would then (in theory) affect the proxies in some fashion, depending on the resolution of the proxies and how they respond regionally, including lags, gain or damping. The proxy response might also be greatly affected by other climate factors at the time.
In other words, the spikes that you add to the proxies would have all the resolution problems that the proxies themselves have.
However, adding spikes to the proxy data is an entirely different animal. The proxy data is numbers on a sheet of paper or its electronic equivalent. Now you are adding (drawing) high resolution spikes onto low resolution proxy data, with no accounting for regional effects, lag, gain, damping or confounding factors. It should be no surprise at all that these high resolution spikes jump out.
What we are seeing in action is actually a form or misdirection often used in stage magic. It fools us on the stage just as it does in science. The magician tells you he is adding spikes to the proxies, but in reality he is not. He is adding spikes to the proxy data, which is not the same as adding spikes to the proxies themselves.
It is our minds that create the confusion between what the proxies actually are and what the proxy data actually is. The proxies are ocean cores – they are real objects. The proxy data is an abstract representation of the real object. However in our minds we are so used to dealing with real objects as abstract representations that we are fooled into thinking they are one and the same.
An analogy might help better understand the problem:
Imagine for a moment that we are not dealing with temperature, but rather trying to detect planets around stars. We have before us a photograph of a star taken by a telescope on Earth. We look at this under the microscope. However, we find no planets because the telescope lacks the angular resolution to distinguish them from the star itself.
Now let's go out to the star in question, add planets around the star and take more photos with our telescope. These planets are real objects. However, it will make no difference; we still can't see the planets. In this example we have added a spike to the proxy and it has made no difference.
Now let's add a spike to the proxy data. Instead of placing planets around the star, take the photo from the telescope and draw a picture of a planet on it. This is an example of adding a spike to the proxy data. The photo is an abstract representation of the star and its planets. Now examine the photo under a microscope and voila, the planet (spike) will be visible.
Nancy,
I like your analogy with the telescope. The telescope has a certain spatial resolution and you cannot see any detail below that resolution. So the objective is to investigate what the resolution of the proxy data used by Marcott et al. actually is. I was checking Tamino's analysis because he was claiming good resolution.
I did not add a spike to each proxy. I tried to simulate how each proxy would sample such a “real” spike in the data. There are indeed regional variations but for this purpose we just ignore them and assume a global 0.9C rise and fall lasting 200 years. I then just assumed a 20% random resolution uncertainty in the proxy measurement times. To see this here is a snippet of the code for the 3 spikes.
The averaging analysis carries on as before.
The Marcott paper actually states the following.
So I suspect Tamino has done what you implied above – added the spike to each proxy individually.
Excellent. You dealt with two of the uncertainties in the variables that are obvious, the temperature and the date, but I’m still uneasy about the quantum of the uncertainty, especially the time. Has Marcott really measured the proxy temperature of ONE year that has a date +/- 20 years, or a 100 year period that is +/- 150 years?
Sedimentation rates will determine the resolution but not the date, which will be affected by preservation as well as cross-correlation of laminae patterns. As you noted, temperature is one factor of several, and the temperature of the water vs the temperature of the air is also a question. It seems to me that people talk of swimming and say that this year the water is colder than it used to be, presumably referring to (nearshore) current changes and wind/weather induced shallow-deep mixing.
The temperature variation we are trying to determine is global, not regional. A little work I did on temperature variations at the time champagne was “invented” showed that even in France there was considerable regional variation in temperature profiles over the hundred and fifty year period I was looking at. Any local proxy thus had a considerable uncertainty as how it represented the larger region, let alone world.
We put a line down the mathematically derived center of the data, but I wonder if we should be putting a thick sausage down the center, and saying that within that sausage we could have almost any high frequency event.
In this sort of work we try to find an internally consistent solution. But unless we are mentally committed to a Unique Solution, we have to admit that at a certain point many, many solutions fit as well. It is the variation in equal, internally consistent solutions that drives our actual uncertainty. Monte Carlo simulations are mathematically correct in how they work, but are they actually applicable given that the historical record is just one outcome that actually happened, and (according to both sides of the argument) has internal patterns that do not repeat themselves?
Uncertainty seems to lie in these parameters:
1. Proxy resolution as to date of any given interval, i.e. date,
2. Proxy resolution as to interval proxy covers, i.e. years,
3. Proxy representation of the temperature affecting creation of the element, i.e. alkenones,
4. Alkenone creation temperature vis-a-vis regional temperatures, and
5. Regional temperatures (from alkenones) as representative of the global temperature.
I put in “5” because if a proxy tells us at least about the region of the proxy, then we still have to go from region to global; at a minimum, we should be weight-balancing regional proxy data to determine the global picture – because we know that in the current world large regions do not represent the global picture.
All of this is not to say Marcott hasn’t done good work that is useful, but that the uncertainties in the quanta of the result look considerably larger than the “thin black line” we are discussing.
The “Sausage Trend” is closer to our Limits to Knowledge, as a pragmatist philosopher might call it, than the thin black line.
I can only agree with you. Sedimentation measurements must integrate over long time intervals spanning decades. There is no unique time for each proxy measurement; they all have a natural time-average spread, and each depends on location. All these factors must act to further smooth out short term variations in climate, beyond the conservative errors described above. The data are insensitive to short term climate excursions and would not record the 20th century warming unless it lasts another 300 years. We are indeed left with a broad sausage trend over 10,000 years, and Marcott's real achievement has been to identify that trend.
Dr. Best, I've been trying to figure out precisely what it is that's different between your original analysis and that of Tamino. You wrote, 'In order to show this he artificially generated three 200-year long triangular peaks of amplitude 0.9C at dates 7000BC, 3000BC and 1000BC. These were added to the underlying data and processed as before. The resultant signals would have easily been detected (he claims)'
And then, ‘Lets do exactly the same thing on the individual raw proxy data and then process them in the same way as I described previously. I simply increased all proxy temperatures within 100 years of a peak by DT = 0.009*DY, where DY=(100-ABS(peak year-proxy year)). Each spike is then a rise of 0.9 deg.C over a span of 100 years, followed by a return to zero over the next 100 years. The same three dates were used for the peaks as those used by Tamino. The results are shown below for anomaly data binned in 50 year time intervals.’
It sounds as though you added the same 0.9K spike to the underlying data (i.e. unattenuated by smoothing and without timing uncertainty between the various proxies), but that the distinction is you added the spikes into the proxy data itself at the resolution of that proxy data. Are you saying that this is not effectively what Tamino did? I'm actually surprised that the peaks in your reconstruction are so small. Is the difference due to the fact that the resolution is poor, such that many proxies don't happen to pick up the spike at all (so that effectively the spikes have a lower weight in the overall reconstruction)?
In that case I see the point that atomic-ghost is making at April 6, 2013 at 8:13 am, that the spikes would be captured more fully by the various proxies if in fact the proxies tend to record such short temperature signals more like a running average. At the same time, we know that Tamino certainly did not take into account any smoothing of proxies in his analysis, so I'm not sure why atomic-ghost endorses his approach.
Perhaps here is a point that needs making. And I hope it has already been done. There are a total of 73 proxies. How many data points are there per proxy?
Ultimately, how many datapoints are there?
The uncertainty is in the datapoints first. There is an assumption, not well grounded, that the uncertainty type is the same in all datapoints. Not true. Time uncertainty will definitely be different, for one. So you can’t simply lump them together and take a square root to determine “more accurately” what the chosen datapoint is.
We need Sausage Patterns, not lines.
Doug, you can download the spreadsheet with the data at http://www.sciencemag.org/content/suppl/2013/03/07/339.6124.1198.DC1
Thanks, Mike. I’ll see what I can do. Funny, at work I’d dive in for hours. At home, less so inclined.