Climate Change spiralling out of control

This is an updated animation, restyled to resemble Ed Hawkins' climate spiral. Click on the image to view the full animation.

The Eemian interglacial was about 3C warmer than today. If we can learn to control CO2 levels, perhaps we can avoid the next devastating glaciation, due to start in a few thousand years' time. We would need to keep CO2 above 400 ppm!


Land Temperature Anomalies.

This post studies the methodologies and corrections that are applied to derive global land temperature anomalies. All the major groups involved use the same core station data but apply different methodologies and corrections to the raw data. Although overall results are confirmed, it is shown that up to 30% of observed warming is due to adjustments and the urban heat effect.

What are temperature anomalies?

If there were perfect coverage of the earth by weather stations, then we could measure the average temperature of the surface and track any changes with time. Instead, there is an evolving set of station measurements, incomplete in both space and time. This causes biases. Consider a 5°x5° cell containing a two-level plateau above a flat plain at sea level. Temperature falls by about 6.5C per 1000 m of altitude, so the real temperatures at the different levels would be as shown in the diagram. The correct average surface temperature for that grid cell is then roughly (3*20 + 2*14 + 7)/6, or about 16C. What is actually measured depends on where the sampled stations are located. Since the number of stations and their locations change constantly with time, there is little hope of measuring any underlying temperature trend this way. You might even argue that an average surface temperature, in this context, is a meaningless concept.

The mainstream answer to this problem is to use temperature anomalies instead. Anomalies are typically defined relative to monthly 'normal' temperatures over a 30-year period for each station: CRU use 1961-1990, GISS use 1951-1980 and NCDC use the full 20th century. Then, in a second step, these 'normals' are subtracted from the measured temperatures to give DT, the station 'anomaly', for any given month. These are averaged within each grid cell, and the cells are combined in a weighted average to derive a global temperature anomaly. Any sampling bias has not really disappeared but has been mostly subtracted out. There remains the assumption that all stations within a cell react in synchrony to warming (or cooling). The procedure also introduces a new problem for stations without sufficient coverage in the 30-year baseline period, invalidating some of the most valuable older stations. There are methods to avoid this based on minimizing the squares of offsets, but these rely on least squares fitting to adjust average values. The end result, however, changes little whichever method is used.
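As a minimal sketch of the procedure described above (the station data and numbers here are invented, not real GHCN or CRU records), the monthly normals and anomalies for one station could be computed like this:

```python
# Sketch of the anomaly method for one hypothetical station: the monthly
# 'normals' are the 1961-1990 means for each calendar month, and the
# anomaly is each measurement minus its month's normal.
def monthly_normals(records, base=(1961, 1990)):
    """records: dict {(year, month): temperature}. Returns the 12 normals."""
    normals = {}
    for m in range(1, 13):
        vals = [t for (y, mm), t in records.items()
                if mm == m and base[0] <= y <= base[1]]
        normals[m] = sum(vals) / len(vals)
    return normals

def anomalies(records, normals):
    return {(y, m): t - normals[m] for (y, m), t in records.items()}

# Toy data: a seasonal cycle plus a slow warming trend of 0.01 C/year.
records = {(y, m): 10 + 8 * abs(m - 6.5) / 5.5 + 0.01 * (y - 1950)
           for y in range(1950, 2011) for m in range(1, 13)}
anoms = anomalies(records, monthly_normals(records))
# The seasonal cycle cancels: anomalies depend only on the trend.
```

Note how the large seasonal swing drops out entirely; only the departure from the baseline period survives, which is exactly why offsets between stations do not matter but changes in station composition still can.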

The situation in 1990

CRU had developed their algorithms several years before NCDC first released version 1 (V1) of their station data in 1989. This contained data for about 6000 stations, many of which overlapped with stations provided by CRU. I have re-calculated the yearly anomalies from V1 and compared them to the CRU data then available (Jones 1988), as shown in Figure 1. The first IPCC assessment (FAR) was written in 1990, and at that time there was essentially no firm evidence for global warming. This explains why FAR concluded that any warming signal had yet to emerge.

Figure 1: First release of GHCN (1990) compared to Jones et al. 1988.

We see that all the evidence for global warming has arisen since 1990.

Situation Today

The current consensus, that land surfaces have warmed by about 1.4C since 1850, is due to two effects. Firstly, the data from 1990 to 2000 show a consistent steep rise in temperature of ~0.6C. Secondly, the data before 1950 have got cooler; this second effect is a result of data correction and homogenization.

The latest station temperature data available are those from NCDC (V3) and from Hadley/CRU (CRUTEM4.3). NCDC V3C is the ‘corrected & homogenized’ data while V3U is still the raw measurement data. I have calculated the annual temperature anomalies for all three datasets: GHCN V1, V3U (uncorrected) and V3C (corrected). The CRUTEM4 analysis is based on 5549 stations, the majority of which are identical to GHCN. In fact CRU originally provided their station data as input to GHCN V1.

Figure 2 shows the comparison between the raw(V3U) and CRUTEM4 while Figure 3 compares V1 and V3U with the corrected V3C results.

Figure 2: Comparison of V3raw with CRUTEM4.3.


Figure 3: Comparison of V1, V3U and V3C global temperature anomalies. V3C are the red points.

The uncorrected data show ~25% less warming than V3C. This is the origin of the increased 'cooling' of the past: the corrections and data homogenization have mostly affected the older data by 'cooling' the 19th century. There is also a small increase in recent anomalies in the corrected data. A comparison of V3C, GISS, BEST and CRUTEM4 gives a consistent picture of warming of ~0.8C on land surfaces since 1980. Data corrections have had a relatively small effect since 1980.

Figure 4: Comparison of all the main temperature anomaly results.

Relationship of Land temperatures to Global temperatures

How can the land temperature trends be compared to ocean and global temperatures? The dashed curves in Figure 4 are smoothed by a 4-year-wavelength FFT filter. Averaging these together gives one 'universal' trend, shown as the grey curve. The result is a net 0.8C rise since 1950 with a plateau since 2000 (the Hiatus). Let's call this trend L.

Now we can take the ocean temperature data HadSST2 and apply the same FFT smoothing, giving an ocean trend O. We can then define a global trend G based on the fractions of the earth's surface covered by ocean and by land:

G = 0.69*O + 0.31*L

Next we can simply plot G and compare it to the latest annual Hadcrut4 data.


It is a perfect fit! This demonstrates an overall consistency between the ocean, land and global surface data. Next we look at other possible underlying biases.
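The land/ocean combination above can be sketched in a few lines; the series below are illustrative placeholder values, not real HadSST2 or CRUTEM data:

```python
# Combine smoothed ocean (O) and land (L) anomaly series into a global
# series G, weighted by the quoted surface fractions (ocean ~69%, land ~31%).
def global_trend(ocean, land, f_ocean=0.69):
    return [f_ocean * o + (1 - f_ocean) * l for o, l in zip(ocean, land)]

O = [0.30, 0.35, 0.40]   # hypothetical smoothed ocean anomalies, C
L = [0.60, 0.70, 0.80]   # hypothetical smoothed land anomalies, C
G = global_trend(O, L)   # first value: 0.69*0.30 + 0.31*0.60 = 0.393
```

Because the ocean fraction dominates, G always sits much closer to the ocean series than to the faster-warming land series.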

CRUTEM3 and CRUTEM4

There are marked differences between CRUTEM3 and CRUTEM4, for two main reasons. Firstly, CRU changed the method of calculating the global average: CRUTEM3 used (NH+SH)/2, while CRUTEM4 uses (2NH+SH)/3, the argument being that land surfaces in the NH have about twice the area of those in the SH. Secondly, CRUTEM4 added ~600 new stations in Arctic regions, where anomalies are known to be increasing faster than elsewhere. This is also the main reason why 2005 and 2010 became slightly warmer than 1998; in CRUTEM3 the hottest year is still 1998.
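The effect of the change in averaging convention is easy to see with hypothetical hemispheric anomalies (the values below are invented for illustration, not real CRUTEM numbers):

```python
# The two CRU global-averaging conventions applied to the same pair of
# hemispheric land anomalies. NH warming is typically the larger one.
nh, sh = 0.9, 0.5   # hypothetical hemispheric anomalies, C

crutem3_avg = (nh + sh) / 2        # equal hemisphere weights -> 0.70
crutem4_avg = (2 * nh + sh) / 3    # NH weighted by its ~2x land area
difference = crutem4_avg - crutem3_avg
```

Whenever the NH anomaly exceeds the SH anomaly, the CRUTEM4 convention yields the larger global value, which is the direction of the shift described in the text.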

This process of infilling the Arctic with more data has been continued with Cowtan and Way[3] corrections to CRUTEM4. This is possible because linear (lat,lon) grid cells multiply fast near the poles.
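The reason polar cells need careful handling is that a regular (lat, lon) grid over-represents high latitudes; a proper global average therefore weights each cell by the cosine of its latitude. A minimal sketch with invented cell anomalies:

```python
import math

# Equal (lat, lon) grid cells shrink toward the poles, so a global mean
# must weight each cell by cos(latitude) of its centre. Values invented.
def area_weighted_mean(cells):
    """cells: list of (latitude_of_cell_centre_deg, anomaly)."""
    weights = [math.cos(math.radians(lat)) for lat, _ in cells]
    total = sum(w * a for w, (_, a) in zip(weights, cells))
    return total / sum(weights)

# One tropical, one mid-latitude and one strongly warming Arctic cell:
cells = [(2.5, 0.2), (42.5, 0.5), (82.5, 2.0)]
simple_mean = sum(a for _, a in cells) / len(cells)   # 0.9
weighted = area_weighted_mean(cells)    # pulled toward low latitudes
```

The unweighted mean badly exaggerates the Arctic contribution, which is why infilling extra Arctic cells must be paired with correct area weighting.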

Figure 5: Differences between CRUTEM3, CRUTEM4 and different global averaging methodologies. All results are calculated directly from the relevant station data.

To summarise

  1. ~0.06C is due to the large increase in Arctic stations
  2. ~0.1C is due simply to the change in averaging methodology

Arctic stations have large seasonal swings in temperature of 20-40C. Interestingly the increase in anomalies seems to occur in winter rather than in summer.

Urban Heat Effect


Fig 6: Growing cities appear to be ‘cooler’ in the past when converted to temperature anomalies

The main effect of Urban Heat Islands (UHI) on global temperature anomalies is to 'cool' the past. This may seem counter-intuitive, but the inclusion of stations in large growing cities has introduced a small long-term bias in global temperature anomalies. To understand why this is the case, consider a hypothetical rapid urbanization causing a 1C rise relative to a nearby rural station after ~1950, as shown in Figure 6.

All station anomalies are calculated relative to a fixed set of 12 monthly normals, e.g. for 1961-1990. This is independent of any absolute temperature offsets between stations: even though a large city like Milan is up to 3C warmer than the surrounding area, that makes no difference to their relative anomalies.

Figure 7: Comparison of V3U (blue), V3C (red) and CRUTEM4 (green). The top graph shows the adjustments.

Such offsets are normalized away once the seasonal averages are subtracted. As a result, 'warm' cities, when graphed as anomalies, appear 'cooler' than the surrounding areas before 1950. This is just another artifact of using anomalies rather than absolute temperatures. A good example of this effect is Sao Paulo, which grew from a small town into one of the world's most populous cities. There are many other similar examples – Tokyo, Beijing etc.
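This renormalization effect is easy to reproduce. The sketch below uses a hypothetical rural station and a hypothetical city with a 1C urbanization step at 1950; taking anomalies against a 1961-1990 baseline pushes the city's pre-1950 values down by 1C:

```python
# Renormalization bias sketch: a city that warmed by 1 C through
# urbanization before the 1961-1990 baseline appears 'cooler' in the past
# once anomalies are taken. All numbers are hypothetical.
years = list(range(1900, 2001))
rural = [10.0] * len(years)                          # flat rural record
urban = [10.0 if y < 1950 else 11.0 for y in years]  # +1 C UHI step at 1950

def anomaly(series, years, base=(1961, 1990)):
    norm = sum(t for t, y in zip(series, years) if base[0] <= y <= base[1])
    norm /= sum(1 for y in years if base[0] <= y <= base[1])
    return [t - norm for t in series]

rural_anom = anomaly(rural, years)   # zero everywhere
urban_anom = anomaly(urban, years)   # -1 C before 1950, 0 after
```

The urban station's recent anomalies are unchanged; it is only its past that appears colder, which is exactly the bias described above.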

Santiago, Chile. NCDC corrections increase 'warming' by ~1.5C compared to CRUTEM4.

To investigate this effect globally, I simply removed the 500 stations showing the largest net anomaly increase since 1920 and then recalculated the CRUTEM4 anomalies. Many of these 'fast warming' stations are indeed large cities, especially in developing countries; they include Beijing, Sao Paulo, Calcutta and Shanghai. The full list can be found here.


Figure 8: The red points are the standard CRUTEM4.3 analysis of all 6250 stations. The blue points are the result of the same calculation after excluding the 500 stations showing the greatest warming since 1920. These include many large growing cities.

The results show that recent anomalies are little affected by UHI. Instead, it is the 19th century that has been systematically cooled, by up to 0.2C, because most of the urban warming in large cities had already occurred before the end of the 1961-1990 baseline period. Anomalies just measure DT relative to a fixed time.

Data Adjustments and Homogenization.

There are often good reasons to correct some station data. Some individual measurements are clearly wrong, generating a spike; these 'typo' errors are easy to detect and correct manually. Data homogenization, however, is now done by automated procedures, which can themselves introduce biases. Homogenization smooths out statistical variations between nearby stations and accentuates underlying trends. Systematic shifts in temperature measurements do occur for physical reasons, but detecting them automatically is risky:

  1. Stations can move location, causing a shift, especially if the altitude changes.
  2. New instrumentation may be installed.
  3. The Time of Observation (TOBS) can change – mainly a US problem.

The NCDC algorithm is based on pairwise comparisons with nearby sites to 'detect' such shifts in values. However, if this process is too sensitive, then small shifts will be applied regularly. This seems to be the case in the 2000 cases I have looked at: the shifts are systematically more negative in the early periods. A priori, there is no reason why temperatures 100 years ago could not have been as warm as today, yet the algorithm systematically shifts such cases down. One exception is some large cities, such as Tokyo, where a strong warming trend is flagged as unusual. As discussed above, the net effect on anomalies is still to cool the early periods.
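The following toy changepoint detector illustrates the general idea of a pairwise comparison; it is not the NCDC algorithm, just a minimal sketch of how differencing a candidate series against a neighbour exposes a step shift:

```python
import statistics

# Toy pairwise shift detection: difference a candidate series against a
# nearby reference and flag the split point where the mean of the
# difference series changes the most. Illustration only, not NCDC code.
def detect_shift(candidate, reference, threshold=0.5):
    diff = [c - r for c, r in zip(candidate, reference)]
    best, best_gap = None, 0.0
    for k in range(5, len(diff) - 5):
        gap = abs(statistics.mean(diff[:k]) - statistics.mean(diff[k:]))
        if gap > best_gap:
            best, best_gap = k, gap
    return (best, best_gap) if best_gap > threshold else (None, 0.0)

# Candidate with an artificial +1 C jump at index 30:
ref = [10.0] * 60
cand = [10.0] * 30 + [11.0] * 30
idx, gap = detect_shift(cand, ref)
```

The danger discussed in the text is visible here too: if `threshold` is set too low, ordinary noise between two stations will be 'detected' and corrected just as readily as a genuine station move.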

Some examples are shown below. Vestmannaeyja is an Icelandic station. In each figure the graphs are green for CRUTEM4, red for V3C and blue for V3U.


The algorithm has detected a ‘shift’ around 1930 lowering measured temperatures before that date.

The graphs in each figure are ordered as follows. The bottom graph shows the monthly average temperatures. The next 2 graphs are the monthly anomalies followed by the yearly anomalies. Finally the top graph shows the adjustments made to the yearly anomalies by NOAA (red) and CRU (green).


Santiago, Chile. Apparent warming has been mostly generated by the NCDC adjustments themselves. Is this justified?

Most of the NOAA adjustments are simple step functions that shift a block of measurements up or down. These are the sort of corrections expected for a station move, but they have been applied to all stations. The algorithm searches out unusual trends relative to nearby stations and then shifts them back towards the expected trend. Mostly it is the early data that get corrected, and most corrections are negative. This explains the origin of the 25% increase in net global warming for V3C compared to the raw data V3U. Some changes are justified, whereas others are debatable. One example where the correction algorithm generated an apparent warming trend that was not present in the measured data is Christchurch, New Zealand.


Christchurch, New Zealand. The top graph shows the shifts introduced by the NCDC correction algorithm. There seems to be no apparent justification for the two upward shifts made in 1970 and 1990. Without these shifts there is no warming trend in the data after 1950.

The pairwise algorithm looks for anomalous changes in temperature data by comparing measurements with nearby stations. Since some of those stations are themselves affected by UHI, there remains the suspicion of a past cooling bias also infecting the correction algorithm.

Conclusions

Measuring global warming based on temperature anomalies is a bit like deciding whether the human population is getting taller based on a random sample of height measurements. Despite this, there can be little doubt that land surfaces have warmed by about 0.8C since 1970, or that since 2000 there has been little further increase.

Data correction and homogenization as applied to station data have suppressed past temperature anomalies and slightly increased recent ones. The net result is to have increased apparent global warming since the 19th century by 25%.

The urban heat island effect also decreases apparent values of past temperature anomalies by ~0.2C while leaving recent values mostly unchanged. This is due to a re-normalization bias of the zero line.

CRU changed their definition of the global land average in the transition from CRUTEM3 to CRUTEM4. This, together with the addition of ~600 new Arctic stations, explains the increase in recent temperature anomalies.

Data sources:

  1. NOAA GHCN V3: https://www.ncdc.noaa.gov/ghcnm/v3.php
  2. CRUTEM4: http://www.metoffice.gov.uk/hadobs/crutem4/data/download.html
  3. Cowtan & Way: http://www-users.york.ac.uk/~kdc3/papers/coverage2013/

The Economic Impact of Greenhouse Gas Emissions

An optimistic prediction for future climate change and impact.

Guest Post by Ken Gregory

Image licensed from Shutterstock

Summary

High estimates of climate sensitivity to greenhouse gases assume aerosols caused a large cooling effect, which canceled some of the previous warming effect, and little or no natural climate change. Recent research indicates that the aerosol effect is much less than previously thought. The transient climate response (TCR) to greenhouse gas emissions, the warming when carbon dioxide (CO2) doubles in about 125 years, was estimated by climatologist Nicholas Lewis at 1.2 °C using an energy balance approach and the new aerosol estimates but assuming that there was no natural long-term warming nor any urban heat contamination of the temperature record. However, proxy records demonstrate that there are millennium scale natural climate cycles and numerous studies indicate that the major temperature indexes are contaminated by the urban heat island effect (UHIE) of urban development. Adjusting the energy balance calculations to account for the natural recovery from the Little Ice Age and the UHIE reduces the TCR to 0.85 °C. Equilibrium climate sensitivity is estimated at 1.02 °C.

Using the FUND integrated assessment model results, the mean estimate of the social cost of carbon on a global basis is determined to be -17.7 US$/tonne of CO2, and is extremely likely to be less than -7.7 US$/tonne of CO2. The benefits of carbon dioxide fertilization, a longer growing season, greater arable land area, reduced mortality and reduced heating costs greatly exceed the harmful effects of warming. The results indicate that governments should subsidize fossil fuels by about 18 US$/tonne of CO2, rather than impose carbon taxes.

Energy Balance Climate Sensitivity

The most important parameter in determining the economic impact of climate change is the sensitivity of the climate to greenhouse gas emissions. Climatologist Nicholas Lewis used an energy balance method to derive a best estimate of the Equilibrium Climate Sensitivity (ECS) of 1.45 °C for a doubling of CO2 in the atmosphere, with a likely range [17 – 83%] of 1.2 to 1.8 °C; see here1. This analysis updates a previous paper by Nicholas Lewis and Judith Curry, here2. ECS is the global temperature change resulting from a doubling of CO2 after the oceans have been allowed to reach temperature equilibrium, which takes about 3000 years.

A more policy-relevant parameter is the Transient Climate Response (TCR) which is the global temperature change at the time of the CO2 doubling with CO2 increasing at 1%/year, which would take 70 years. A doubling of CO2 at the current growth rate of 0.55%/year would take 126 years. The analysis gives the TCR best estimate at 1.21 °C with a likely range [17 – 83%] of 1.05 to 1.45 °C.

Energy balance estimates of ECS and TCR use these equations:

TCR = F2xCO2 × ΔT/ΔF        ECS = F2xCO2 × ΔT/(ΔF − ΔQ)

where F2xCO2 is the forcing from a doubling of CO2, estimated at 3.71 W/m2, ΔT is the change in global average temperature between two periods, ΔF is the change in forcing between the two periods, and ΔQ is the top-of-atmosphere radiative imbalance, which is the rate of heat uptake of the climate system. The oceans account for over 90% of the climate system heat uptake. The two periods used for the analysis were 1859-1882 and 1995-2011. They were chosen to give the longest early and late periods free of significant volcanic activity, which provide the largest change in forcing and hence the narrowest uncertainty ranges. The long interval between the periods has the effect of averaging out short-term ocean oscillations such as the Atlantic Multi-decadal Oscillation (AMO) and the Pacific Decadal Oscillation (PDO), but it does not account for millennium-scale ocean oscillations or indirect solar influences.

The 5th assessment report (AR5) of the Intergovernmental Panel on Climate Change (IPCC) gave a best estimate of aerosol forcing of -0.9 W/m2 (for 2011 vs 1750) with a 5 to 95% uncertainty range of -1.9 to -0.1 W/m2 [WG1 AR5 page 571]. Aerosols have a direct effect and an indirect effect from aerosol-cloud interactions, both of which are estimated to cause cooling, and they are the dominant contribution to uncertainty in climate sensitivity estimates. AR5 substantially reduced this uncertainty compared to AR4, but the reduction came too late to be incorporated into the climate models used in AR5. Consequently, those models used large aerosol cooling to offset greenhouse gas warming in the historical period, and assume that aerosol cooling will decline in the future. This allows climate models to have high sensitivity to greenhouse gases while still roughly matching the historic temperature record. Aerosol forcing depends strongly on very uncertain estimates of the level of preindustrial aerosols.

Nicholas Lewis writes, “In this context, what is IMO a compelling new paper3 by Bjorn Stevens estimating aerosol forcing using multiple physically-based, observationally-constrained approaches is a game changer.” Stevens is an expert on cloud-aerosol processes. He derived a new lower estimate of aerosol forcing of -1.0 W/m2. The new aerosol forcing best estimate from 1750 is -0.5 W/m2 with a 5 to 95% uncertainty range of -1.0 to -0.3 W/m2.

Lewis used this estimate for aerosol forcing together with the estimates of the other forcings given in AR5, here4. Ocean heat content is from Box 3.1, Figure 1 of AR5. The likely 83% upper bound of ECS was reported by the IPCC in AR5 as 4.5 °C, but this drops to 2.45 °C when calculated with the AR5 reported forcings, and to only 1.8 °C when substituting the Stevens estimate of aerosol forcing. The IPCC did not provide a 95% upper estimate of ECS, but estimated the 90% upper limit at 6 °C. The upper 95% limit drops dramatically from 4.05 °C using the AR5 forcings to only 2.2 °C using the new Stevens aerosol forcing estimate. In terms of TCR, the Stevens aerosol forcing reduces the upper 95% limit from 2.5 °C to 1.65 °C.

According to HadCRUT4.4, the temperature change between the two periods (1859-1882 and 1995-2011) was 0.72 °C. Using the equations for TCR and ECS, the total forcing change \Delta F during the interval was 2.21 W/m2 and the heat uptake \Delta Q was 0.365 W/m2.
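Plugging these numbers into the two energy-balance equations reproduces the best estimates quoted earlier (1.21 °C and 1.45 °C):

```python
# Reproducing the energy-balance arithmetic with the values quoted above.
F2x = 3.71      # W/m^2, forcing from doubled CO2
dT = 0.72       # C, HadCRUT4.4 change between the two periods
dF = 2.21       # W/m^2, change in total forcing
dQ = 0.365      # W/m^2, climate system heat uptake

TCR = F2x * dT / dF            # transient climate response
ECS = F2x * dT / (dF - dQ)     # equilibrium climate sensitivity
```

Subtracting the heat uptake in the denominator is what makes ECS larger than TCR: the ocean is still absorbing part of the forcing, so the surface has not yet expressed the full equilibrium warming.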

Adjustment for Millennium Cyclic Warming

This analysis by Lewis does not account for the long-term natural warming from the Little Ice Age (LIA), likely driven by indirect solar activity. The temperature history shows an obvious millennium scale temperature oscillation, indicating that natural climate change accounts for a significant portion of the temperature recovery since the LIA. Climatologist Dr. Richard Lindzen writes, “Lewis does not take account of natural variability, and, I suspect, his estimates are high.”

Fredrik Ljungqvist prepared a temperature reconstruction of the Extra-Tropical Northern Hemisphere (ETNH) during the last two millennia with decadal resolution [Ljungqvist 2010] here5. The results are shown in Figure 1.

Human-caused greenhouse gas emissions caused no significant temperature change before 1900, because cumulative CO2 emissions to 1900 were insignificant. The approximate temperature trend during each of the periods identified in Figure 1 was estimated. The average absolute rate of natural temperature change over the four periods was 0.095 °C/century, as shown in Table 1 and Figure 2.

Figure 1. Extra-tropical Northern Hemisphere temperatures utilizing many palaeo-temperature proxy records, adapted from Ljungqvist 2010. The shading represents 2 standard deviation errors. RWP = Roman Warm Period AD 1-300; DACP = Dark Age Cold Period 300-900; MWP = Medieval Warm Period 800-1300; LIA = Little Ice Age 1300-1900; CWP = Current Warm Period 1900-now.

 

Table 1 – Temperature Change of ETNH Over Two Millennia

Period      Begin (°C)   End (°C)   Change (°C)   Length (centuries)   |Rate| (°C/century)
RWP-DACP    0.06         -0.37      -0.43         4.0                  0.108
DACP-MWP    -0.37        0.01       0.38          5.2                  0.073
MWP-LIA     0.01         -0.55      -0.56         6.1                  0.092
LIA-1900    -0.55        -0.25     0.30          2.8                  0.107
Average                                                                0.095

 

The warming of the ETNH from 1859 to date attributable to natural climate change is taken to be the average of the absolute rates of temperature change during the periods identified in Table 1.
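The Table 1 arithmetic can be checked directly; the numbers below are simply the table's own entries:

```python
# Recomputing the Table 1 average rate of natural change (C/century).
segments = {               # (temperature change C, duration in centuries)
    "RWP-DACP": (-0.43, 4.0),
    "DACP-MWP": ( 0.38, 5.2),
    "MWP-LIA":  (-0.56, 6.1),
    "LIA-1900": ( 0.30, 2.8),
}
rates = [abs(dc) / dur for dc, dur in segments.values()]
average_rate = sum(rates) / len(rates)   # about 0.095 C/century
```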

Figure 2. Extra-tropical Northern Hemisphere temperature change with a 6th order polynomial fit and line segments.

The Ljungqvist 2010 paper gives several reasons why the reconstruction likely "seriously underestimates" the temperature variability, but it makes no corrections to the reconstruction. The author assumed a linear relationship between temperature and proxy response, but the response is often non-linear. The tree-ring proxies are biased toward the summer growing season: if the Little Ice Age cooling was more pronounced during winter months, the annual estimates would be biased warm. The large dating uncertainties of the sediment proxies have the effect of "flattening out" the temperatures, so the true magnitudes of the warm and cold periods are underestimated.

The proxy temperatures did not rise as sharply during the 20th century as the thermometer record did, indicating that the instrument temperature record is biased high by the uncorrected urban heat island effect (UHIE), and/or that the proxies underestimate recent temperature variations.

The Ljungqvist reconstruction will be adjusted here to account for the summer tree ring bias and the “flattening out” effect of the sediment proxies.

Adjustment for Summer Tree-ring Bias

The growing season in the ETNH is assumed to be from May through September. The Global Historical Climatology Network (GHCN) CAMS gridded 2 m analysis shows that July temperatures are 29 °C warmer than January temperatures, averaged over 2005-2015.

The annual temperatures were compared to the weighted average of the growing-season months during two decades of the coldest part of the record, 1960 to 1979, and the warmest part of the record, 1995 to 2014, to determine the growing-season bias. The weighting factors were taken from an analysis of tree growth in Oregon, USA (here, Figure 5). The tree growth rates relative to June are shown in Table 2.

Table 2 – Tree Growth Rate Factors by Month

Month                          May    June   July   August   September
Growth rate relative to June   0.75   1.00   0.70   0.35     0.18

 

The actual annual and weighted monthly growing season temperature history over land in the ETNH is given in Figure 3.

Figure 3. Actual annual and weighted average May through September temperatures of the extra-tropical Northern Hemisphere (30 – 90°N). The annual temperatures are indicated by the right vertical axis and the May – September growing season temperatures are indicated by the left vertical axis.

The annual and tree growth rate weighted average growing season temperatures during two cold decades and two warm decades are given in Table 3.

 

Table 3 – Growing Season Bias of Tree-ring Proxies

Period        Annual (°C)   Growing Season (°C)   Annual/Growing Season
1960 – 1979   2.25          13.42
1995 – 2014   3.40          14.35
Difference    1.15          0.93                  123%

 

The annual temperatures change more than the tree-growing-season temperatures, indicating that the tree-ring proxies underestimate the annual temperature variability. Assuming that the seasonal pattern of temperature variability over the last century is similar to that over the last two millennia, the table indicates that tree-ring proxy temperature variability should be scaled up by 23%. Eight of the 30 proxies have this tree-ring seasonal bias.
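The Table 3 arithmetic, using the table's own entries:

```python
# Annual warming vs tree-growth-weighted growing-season warming between
# the two decades, giving the tree-ring proxy scale-up factor.
annual_change = 3.40 - 2.25      # 1.15 C
season_change = 14.35 - 13.42    # 0.93 C
bias_factor = annual_change / season_change   # roughly 1.24, i.e. ~23-24%
```

The ratio comes out at about 1.24, slightly above the ~23% scale-up quoted in the text, presumably a rounding difference in the source.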

In addition to the tree-ring proxies, Ljungqvist identified 11 non-tree-ring proxies that are also subject to the summer seasonal bias. These proxies likely also underestimate the annual temperature variations and the Little Ice Age cooling, but no adjustment for them is made.

Adjustment for Sediment Dating Uncertainty

The dating uncertainty of sediment proxies is typically +/- 160 years. Ljungqvist writes, "The dating uncertainty of proxy records very likely results in "flattening out" the values from the same climate event over several hundred years … so they are unable to capture the true magnitude of the cold and warm periods in the reconstruction."

The actual decadal temperature variation is assumed to be some factor greater than the reconstruction's variation. The smoothed sediment temperatures are assumed to be an average of the actual temperatures from 50 years before to 50 years after the recorded time. A model of the reconstruction was created as a weighted average of the smoothed temperatures of the 12 sediment proxies and the actual temperatures of the 18 non-sediment proxies. A factor of 1.12 was chosen to minimize the difference between the model and the reconstruction, summed over 50 years centred on each of the MWP maximum and the LIA minimum. The result is shown in Figure 4.
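The flattening effect of sediment dating uncertainty can be illustrated with a toy model (the series below are invented; only the ±50-year averaging idea comes from the text):

```python
# Toy version of the sediment smoothing: each recorded value is the mean
# of the 'actual' decadal temperatures within +/-50 years (5 decadal
# steps) of its nominal date, which flattens sharp warm or cold events.
def smooth(series, half_window=5):
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half_window), min(len(series), i + half_window + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

# A hypothetical one-decade-resolution record with a 0.5 C warm event:
actual = [0.5 if 20 <= i < 30 else 0.0 for i in range(60)]
recon = smooth(actual)   # the event's peak is flattened by dating error
```

The smoothed peak is lower than the true peak, which is why the text scales the reconstruction's variation back up by a factor (1.12) to estimate the actual variability.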

Figure 4. Estimated effect of sediment proxy smoothing due to dating uncertainty. The actual temperature variation was estimated at 1.12 times the Ljungqvist reconstruction variation about the mean temperature of the MWP and the LIA extremes.

Total proxy adjustment

The weighted average proxy bias adjustment is shown in Table 5.

Table 5 – Proxy Bias Adjustments

Type                      Number of Proxies   Bias Adjustment Factor
Tree-ring season bias     8                   122.8 %
Sediment smoothing bias   12                  112.0 %
Other proxies             10                  100.0 %
Total (weighted)          30                  110.9 %
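The weighted average in Table 5 follows directly from the three adjustment factors:

```python
# Table 5 weighted average of the proxy bias adjustment factors.
groups = [          # (number of proxies, adjustment factor)
    (8, 1.228),     # tree-ring season bias
    (12, 1.12),     # sediment smoothing bias
    (10, 1.00),     # other proxies
]
n_total = sum(n for n, _ in groups)                    # 30 proxies
weighted = sum(n * f for n, f in groups) / n_total     # about 1.109
```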

Adjustment for Global Temperatures

The temperature variability of the southern hemisphere and tropics is less than that of the northern extra-tropics because of the larger ocean area, so global temperature variations over the last 2000 years would be less than those of the northern extra-tropics. Ideally we would use a temperature reconstruction for the entire globe, but there are too few proxies in the tropics and southern hemisphere. Table 4 shows the temperatures of the ETNH and the globe for the coolest and warmest two-decade periods as recorded by HadCRUT4.4. The table indicates that global temperatures vary by only 80% as much as the ETNH; it is assumed that this held true during the last two millennia.

Table 4 – Global and Extra-tropical Northern Hemisphere Temperature Variations

Period        Global (°C)   ETNH (°C)   Global/ETNH
1900 – 1919   -0.392        -0.280
1990 – 2009   0.396         0.667
Change        0.761         0.948       80.3%


Total Millennium Cycle Adjustment

The global natural recovery from the LIA from 1859 is determined by the average temperature change rate over the four segments of the last two millennia in the ETNH, adjusted for the tree-ring seasonal bias and the sediment smoothing bias, and converted to global values.

The global natural recovery from the LIA is estimated at 0.084 °C/century, which is the product of the 0.095 °C/century for the ETNH, the proxy bias of 110.9 % and the global adjustment of 80.3%. Note that this doesn’t include the seasonal biases of the sediment proxies, so it might underestimate the actual natural warming. The temperature change over the 1.33 centuries between the midpoints of the two periods used in the climate sensitivity analysis is reduced from 0.72 °C by the 0.11 °C natural warming to 0.61 °C of anthropogenic warming.
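The arithmetic behind these figures is straightforward; the midpoints of the two analysis periods are 132.5 years (1.325 centuries) apart:

```python
# Arithmetic for the natural-warming correction described above.
etnh_rate = 0.095          # °C/century, ETNH proxy trend
proxy_bias = 1.109         # Table 5 total bias adjustment
global_scaling = 0.803     # Table 4 Global/ETNH ratio

global_rate = etnh_rate * proxy_bias * global_scaling  # ~0.084 °C/century

# Midpoints of 1859-1882 and 1995-2011 are 132.5 years apart.
centuries = ((1995 + 2011) / 2 - (1859 + 1882) / 2) / 100
natural = global_rate * centuries
anthropogenic = 0.72 - natural
print(round(natural, 2), round(anthropogenic, 2))  # 0.11 0.61
```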

The best estimate of ECS is reduced to 1.22 °C [likely range 0.95 – 1.55 °C] and the best estimate of TCR is reduced to 1.02 °C [likely range 0.85 – 1.25 °C]. These estimates do not include an adjustment for the UHIE. These uncertainty ranges have the same spread as determined by Lewis and do not include additional uncertainty due to the millennium warming cycle.

 

Adjustment for the Urban Heat Island Effect

Numerous papers have shown that the UHIE contaminates the instrument temperature record. A study by McKitrick and Michaels 2007, summary here 6, showed that almost half of the warming over land in instrument data sets is due to the UHIE. A study by de Laat and Maurellis 2006 reached similar conclusions. Note that the IPCC dismissed the overwhelming evidence of UHIE contamination by falsely claiming "the locations of greatest socioeconomic development are also those that have been most warmed by atmospheric circulation changes". That is, our cities were built where there is the most natural warming, a nonsensical claim. Climate models do not reproduce such a correlation, which refutes the claim.

 

A study by Watts et al., presented at the AGU fall meeting 2015, showed that poor siting of temperature stations has caused NOAA to overestimate US warming trends by 59% since 1979. The HadCRUT4 analysis has no specific correction for the UHIE. The GISS temperature dataset uses a UHIE adjustment routine that applies a temperature trend change in the wrong direction in 45% of the adjustments. Instead of eliminating or reducing urbanization effects, these "wrong way" corrections make the urban warming trends steeper, as shown here7.

The McKitrick and Michaels 2007 results show that the HadCRUT temperature index trend over land from 1979 to 2002 would decline from 0.27 °C/decade to 0.13 °C/decade. The UHIE over land is therefore about 0.14 °C/decade, or 0.042 °C/decade on a global basis, since 1979. We assume no UHIE before 1980 and that the UHIE warming trend continues to 2011, giving a UHIE correction of 0.10 °C over the period 1980 to 2008. This reduces the temperature change attributable to greenhouse gas forcings to 0.51 °C between the two periods 1859-1882 and 1995-2011 used in the analysis.
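The land-to-global conversion and the resulting correction can be checked as below. The ~30% land share is an assumption implied by the quoted numbers, not stated explicitly in the source:

```python
# UHIE arithmetic sketched above. The 0.30 land fraction is an
# assumption inferred from the quoted figures (0.042 / 0.14 = 0.30).
uhie_land = 0.14            # °C/decade over land (McKitrick & Michaels)
land_fraction = 0.30        # approximate global land share (assumed)
uhie_global = uhie_land * land_fraction
print(round(uhie_global, 3))        # 0.042

uhie_correction = 0.10      # °C applied over 1980-2008
anthropogenic = 0.61 - uhie_correction
print(round(anthropogenic, 2))      # 0.51
```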

The best estimate of ECS is 1.02 °C and the best estimate of TCR is 0.85 °C.

 

Summary of Climate Sensitivity Estimates

Table 6 summarizes the ECS and TCR best estimates and the likely and extremely likely confidence intervals for five cases. All forcing-based estimates use initial and final periods of 1859-1882 and 1995-2011, respectively. Ranges are given to the nearest 0.05 °C.

Table 6 – Estimates of Equilibrium Climate Sensitivity and Transient Climate Response with Uncertainty Ranges.
Case                                         | ECS Best °C | ECS 17-83% °C | ECS 5-95% °C | TCR Best °C | TCR 17-83% °C | TCR 5-95% °C
IPCC AR5                                     | n/a         | 1.5 – 4.5     | 1 – n/a      | 1.8         | 1.0 – 2.5     | n/a – 3.0
Using AR5 Forcings                           | 1.64        | 1.25 – 2.45   | 1.05 – 4.05  | 1.33        | 1.05 – 1.80   | 0.90 – 2.50
As above but with Stevens' Aerosol Forcing   | 1.45        | 1.20 – 1.80   | 1.05 – 2.20  | 1.21        | 1.05 – 1.45   | 0.90 – 1.65
As above but with Natural Millennium Warming | 1.22        | 0.95 – 1.55   | 0.80 – 1.95  | 1.02        | 0.85 – 1.25   | 0.70 – 1.45
As above but with UHIE Correction            | 1.02        | 0.75 – 1.35   | 0.60 – 1.75  | 0.85        | 0.70 – 1.10   | 0.55 – 1.30

 

The best estimate TCR of 0.85 °C implies that the global temperature will increase from 2016 to 2100 due to anthropogenic CO2 emissions by only 0.57 °C if atmospheric CO2 continues to increase at the current rate of 0.55%/year. Actual temperatures may rise or fall depending on natural climate change.
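A rough check of this projection: TCR applies per doubling of CO2, so the warming is TCR times the base-2 logarithm of the concentration ratio. The result lands within a couple of hundredths of the quoted 0.57 °C, depending on whether 84 or 85 years is used:

```python
import math

# Back-of-envelope check of the projected warming to 2100:
# warming = TCR * log2(CO2 ratio), CO2 growing at 0.55 %/year.
tcr = 0.85          # °C per CO2 doubling (best estimate above)
growth = 0.0055     # 0.55 %/year CO2 growth
years = 2100 - 2016

ratio = (1 + growth) ** years
warming = tcr * math.log2(ratio)
print(round(warming, 2))            # ~0.57 °C

# At the same growth rate, a full doubling takes roughly 125 years:
print(round(math.log(2) / math.log(1 + growth)))  # 126
```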

Note the discrepancy between the ECS upper 83% limit of 4.5 °C reported in AR5 and the calculated upper limit of 2.45 °C using the AR5 reported forcings and heat uptake estimates from empirical measurements. The AR5 value is based on "expert judgment" drawing on clues from the paleoclimate record, climate model outputs and observation-based studies. Climate sensitivity estimates based on the paleoclimate record assume that the only natural forcing comes from tiny changes in total solar irradiance; however, it is extremely likely that indirect solar effects due to changes in solar ultraviolet intensity and the solar magnetic field have a much greater effect on climate, so these estimates have no value. The climate model estimates only reflect the modelers' biases and guesses about how aerosols, clouds and upper-atmosphere water vapour change in response to warming.

Modelers observed that cloud cover generally declined with warming during the 20th century and assumed the decline was caused by the warming, interpreting the change as a positive cloud feedback. But detailed analysis of clouds shows that cloud cover declined due to natural causes, allowing more sunlight in to warm the planet. Radiosonde and satellite measurements show that upper-atmosphere water vapour declines with warming, but climate models predict the opposite behaviour.

The scientific method requires that when theory conflicts with empirical measurements, the theory should be modified to agree with the measurements. The IPCC falsely treats computer model output as evidence of climate sensitivity. Climate sensitivity estimates should be based only on observational studies during the instrument period, and climate models should be changed to agree with the lower observation based climate sensitivity estimates.

 

Social Cost of Carbon

The US Government’s Interagency Working Group (IWG) on Social Cost of Carbon (SCC) uses three Integrated Assessment Models (IAM) to determine the social costs and benefits of greenhouse gas emissions. Two of these models, DICE and PAGE, do not include the benefits of CO2 fertilization and other benefits of warming, and fail to account for adaptation.

The FUND model does include these benefits, but arguably underestimates the benefits of CO2 fertilization. Idso (2013) found that the increase in the atmospheric concentration of carbon dioxide that took place during the period 1961-2011 was responsible for increasing global agricultural output by $3.2 trillion (in constant 2005 US$). According to written testimony for the House of Representatives Committee on Natural Resources by Dr. Patrick Michaels here8, the FUND model may have underestimated the CO2 fertilization effect by a factor of 2 to 3.

The IWG acknowledges that the three IAMs treat CO2 fertilization very differently, but they claim the IAMs were selected “to reflect a reasonable range of modeling choices and approaches that collectively reflect the current literature on the estimation of damages from CO2 emissions.” Michaels wrote here, “This logic is blatantly flawed. Two of the IAMs do not reflect the “current literature” on a key aspect relating to the direct impact of CO2 emissions on agricultural output, and the third only partially so. CO2 fertilization is a known physical effect from increased carbon dioxide concentrations. By including the results of IAMs that do not include known processes that have a significant impact on the end product must disqualify them from contributing to the final result. …. Results should only be included when they attempt to represent known processes, not when they leave those processes out entirely.”

 

The DICE model assumes that the optimum climate for humanity was the pre-industrial climate of 1900, near the end of the Little Ice Age, and that any temperature increase since then is harmful. Testimony by Dr. Mendelsohn to the Minnesota Public Utilities Commission here9 shows that there is no evidence that the temperature increase since 1900 caused any damages, and such damages would be easily detectable. He suggests that net damages would not occur until temperatures are 1.5 to 2.0 °C above pre-industrial levels, equal to 0.7 to 1.2 °C above current temperatures. The DICE model does not include benefits of warming.

Heat- and cold-related mortality is a major component of the SCC. An international study analyzing over 74 million deaths across 13 countries found that cold weather kills 20 times as many people as hot weather. Statistics Canada data for 2007-2011 show that the Canadian death rate in January is 100 deaths/year greater than in August, as shown here10. Clearly the optimum temperature is greater than current temperatures.

The DICE model produces future sea level rise values that far exceed mainstream projections and exceed even the highest end of the projected sea level rise by the year 2300 as reported in the AR5 report.

Dr. Robert Mendelsohn testified here9 that the PAGE model is "not well grounded in economic theory" and uses an "uncalibrated probabilistic damage function". Mendelsohn says here11, "The version of the PAGE model used by the IWG explicitly does not include adaptation. Failing to include adaptation vastly overstates the damage that climate change will cause." For these reasons this report uses only the FUND model to determine the SCC.

The FUND model was developed by Dr. Richard Tol, Professor of Economics at the University of Sussex, UK. The FUND model shows that, assuming an ECS of 3 °C, Canada benefits from emissions by 1.9% of gross domestic product by 2100, equivalent to a benefit of $109 billion annually in 2015 dollars. Anthropogenic climate change will have only positive impacts in Canada, increasing throughout the 21st century.12

Figure 5 shows the SCC (blue line) as a function of ECS. The ECS best estimate is indicated by the red square. The thick red line shows the 17-83% probability range, and the thin red line shows the 5-95% probability range of the ECS estimate. The FUND model values were provided by Dr. Richard Tol in testimony to the Minnesota Public Utilities Commission, Table 3, here13. The SCC values assume a real discount rate of 3%.

Figure 5. The equilibrium climate sensitivity (ECS) as calculated by N. Lewis using aerosol forcing by Stevens, other forcings and heat uptake by IPCC AR5 and global surface temperatures adjusted to account for natural millennium cyclic warming and urban warming from 1980. The ECS best estimate is shown by the red square, uncertainty ranges by the red lines. Social cost of carbon as determined by the FUND integrated assessment model is shown by the blue line.


Projecting the ECS values vertically onto the blue SCC-vs-ECS curve gives the best estimate and confidence intervals of the SCC, as indicated in Figure 6.

The analysis shows that on a global basis the best estimate of ECS, 1.02 °C, gives an SCC of -17.7 US$/tCO2, a substantial net benefit. The likely range is -19.7 to -13.6 US$/tCO2, and the SCC is extremely likely to be less than -7.7 US$/tCO2. These results show that instead of imposing a carbon tax on fossil fuels, there should be a subsidy of about 18 US$/tCO2.
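Reading the SCC off the FUND curve amounts to piecewise-linear interpolation of SCC against ECS. The sketch below illustrates the mechanics only; the anchor points are hypothetical placeholders chosen to pass through the quoted best estimate, not the actual values from Tol's Table 3 (reference 13):

```python
# Sketch of projecting an ECS value onto the SCC-vs-ECS curve.
# scc_table entries are HYPOTHETICAL placeholders, not Tol's data.
def interp(x, pts):
    """Piecewise-linear interpolation through sorted (x, y) points."""
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("x outside table range")

scc_table = [(0.5, -20.9), (1.02, -17.7), (1.55, -13.6), (3.0, 2.0)]
print(round(interp(1.02, scc_table), 1))   # -17.7 at the best-estimate ECS
```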

The benefits of CO2 fertilization, reduced cold weather related mortality, lower outdoor industry costs such as construction costs, increased arable land area and reduced heating costs greatly exceed harmful effects of warming on a global basis.

Figure 6. Social Cost of Carbon in US$/tCO2 indicating best estimate, likely 17-83%, and extremely likely 5-95% uncertainty ranges. The uncertainty ranges do not include uncertainty associated with the millennium warming cycle or the urban warming effect.


The social cost of carbon as determined by IAMs requires numerous assumptions that are not based on science or economics. It depends on assumptions about the choices that future consumers, voters and politicians will make decades and centuries into the future. The SCC in part assumes that governments fail to take appropriate action to mitigate flooding from sea level rise.

Dr. Tol explains, "the causal chain from carbon dioxide emission to social cost of carbon is long, complex and contingent on human decisions that are at least partly unrelated to climate policy. The social cost of carbon is, at least in part, also the social cost of underinvestment in infectious disease, the social cost of institutional failure in coastal countries, and so on."

In Conclusion

The climate is much less sensitive to greenhouse gas emissions than is commonly believed. The expected climate change at the time of doubling of the CO2 concentration is about 0.85 °C, which at current CO2 growth rates would take about 125 years. The social cost of carbon is likely in the range of -20 to -14 US$/tCO2, with a best estimate of -18 US$/tCO2. The benefits of CO2 fertilization and warming are much greater than the harmful effects of warming. Emissions are very beneficial.

 

References

  1. The Implications for Climate Sensitivity of Bjorn Stevens’ New Aerosol Forcing Paper, by Nicholas Lewis, Climate Audit, March 2015. http://climateaudit.org/2015/03/19/the-implications-for-climate-sensitivity-of-bjorn-stevens-new-aerosol-forcing-paper/
  2. The implications for climate sensitivity of AR5 forcing and heat uptake estimates, by Nicholas Lewis and Judith Curry, Climate Dynamics, September 2014. https://niclewis.wordpress.com/the-implications-for-climate-sensitivity-of-ar5-forcing-and-heat-uptake-estimates/
  3. Rethinking the Lower Bound on Aerosol Radiative Forcing, by Bjorn Stevens, Journal of Climate, June 2015. http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-14-00656.1
  4. IPCC, WG1, Annex II: Climate System Scenario Tables, 2013. http://www.ipcc.ch/pdf/assessment-report/ar5/wg1/WG1AR5_AnnexII_FINAL.pdf
  5. A New Reconstruction of Temperature Variability in the Extra-tropical Northern Hemisphere During the Last Two Millennia, by Fredrik Ljungqvist, Geografiska Annaler Series, 2010. http://agbjarn.blog.is/users/fa/agbjarn/files/ljungquist-temp-reconstruction-2000-years.pdf
  6. Background Discussion On: Quantifying the Influence of Anthropogenic Surface Processes and Inhomogeneities on Gridded Global Climate Data, by Ross McKitrick and Patrick Michaels, Journal of Geophysical Research-atmospheres, December 2007. http://www.rossmckitrick.com/uploads/4/8/0/8/4808045/m_m.jgr07-background.pdf
  7. Correct the Corrections: The GISS Urban Adjustment, by Ken Gregory, Friends of Science, June 2008. http://www.friendsofscience.org/index.php?id=396
  8. An Analysis of the Obama Administration’s Social Cost of Carbon, by Dr. Patrick Michaels, U.S. House of Representatives Committee on Natural Resources, July 2015. http://www.friendsofscience.org/index.php?id=2153
  9. Rebuttal Testimony on Socioeconomic Costs, Minnesota Public Utilities Commission, by Robert Mendelsohn, August 2015. http://www.friendsofscience.org/assets/files/Mendelsohn_Rebuttal_20158-113190-05.pdf
  10. Winters not Summers Increase Mortality and Stress the Economy, By Joseph D’Aleo and Allan MacRae, May 2015. http://www.friendsofscience.org/index.php?id=2132
  11. Sur-Rebuttal Testimony on Socioeconomic Costs, Minnesota Public Utilities Commission, by Robert Mendelsohn, September 2015. http://www.friendsofscience.org/assets/files/Mendelsohn_Sur-Rebuttal_20159-113912-05.pdf
  12. Climate Economics, Edward Elgar Publishing Limited, by Richard Tol, 2014.
  13. Rebuttal Testimony on Socioeconomic Costs, Minnesota Public Utilities Commission, by Richard Tol, August 2015. https://www.edockets.state.mn.us/EFiling/edockets/searchDocuments.do?method=showPoup&documentId=%7bA39ADC16-205E-44D3-B080-BC19501F3247%7d&documentTitle=20158-113190-07

 

The data and calculations are at:

http://www.friendsofscience.org/assets/files/SCC_Lewis_CS_2.xls Excel spreadsheet.

This article, with a section on Alberta's climate change plan, is available in PDF format at: http://www.friendsofscience.org/index.php?id=2205

A shorter, non-technical version of this article is at:

http://www.friendsofscience.org/assets/documents/AB_carbon_tax_Economic_Impact_Gregory_Summary.pdf

Posted in AGW, Climate Change | Tagged | 34 Comments