# The Economic Impact of Greenhouse Gas Emissions

An optimistic prediction of future climate change and its impact.

Guest Post by Ken Gregory

Summary

High estimates of climate sensitivity to greenhouse gases assume that aerosols caused a large cooling effect, canceling part of the greenhouse warming, and that there was little or no natural climate change. Recent research indicates that the aerosol effect is much smaller than previously thought. The transient climate response (TCR) to greenhouse gas emissions, the warming at the time carbon dioxide (CO2) doubles (in about 125 years), was estimated by climatologist Nicholas Lewis at 1.2 °C using an energy balance approach and the new aerosol estimates, while assuming no long-term natural warming and no urban heat contamination of the temperature record. However, proxy records demonstrate that there are millennium-scale natural climate cycles, and numerous studies indicate that the major temperature indexes are contaminated by the urban heat island effect (UHIE) of urban development. Adjusting the energy balance calculations to account for the natural recovery from the Little Ice Age and the UHIE reduces the TCR to 0.85 °C. Equilibrium climate sensitivity is estimated at 1.02 °C.

Using the FUND integrated assessment model results, the mean estimate of the social cost of carbon on a global basis is -17.7 US$/tonne of CO2, and it is extremely likely to be less than -7.7 US$/tonne of CO2. The benefits of carbon dioxide fertilization, a longer growing season, greater arable land area, reduced mortality and reduced heating costs greatly exceed the harmful effects of warming. The results indicate that governments should subsidize fossil fuels by about 18 US$/tonne of CO2, rather than impose carbon taxes.

Energy Balance Climate Sensitivity

The most important parameter in determining the economic impact of climate change is the sensitivity of the climate to greenhouse gas emissions. Climatologist Nicholas Lewis used an energy balance method to estimate the equilibrium climate sensitivity (ECS) best estimate at 1.45 °C for a doubling of CO2 in the atmosphere, with a likely range [17 – 83%] of 1.2 to 1.8 °C, see here [1]. This analysis is an update of a previous paper by Nicholas Lewis and Judith Curry here [2]. ECS is the global temperature change resulting from a doubling of CO2 after allowing the oceans to reach temperature equilibrium, which takes about 3000 years. A more policy-relevant parameter is the transient climate response (TCR), the global temperature change at the time of the CO2 doubling with CO2 increasing at 1%/year, which takes 70 years. A doubling of CO2 at the current growth rate of 0.55%/year would take 126 years. The analysis gives the TCR best estimate at 1.21 °C with a likely range [17 – 83%] of 1.05 to 1.45 °C. Energy balance estimates of TCR and ECS use these equations:

$TCR = F_{2 \times CO2} \times \frac{\Delta T}{\Delta F}$

$ECS = F_{2 \times CO2} \times \frac{\Delta T}{\Delta F - \Delta Q}$

where $F_{2 \times CO2}$ is the forcing from a doubling of CO2, estimated at 3.71 W/m2.
$\Delta T$ is the change in global average temperature between two periods, $\Delta F$ is the change in forcing between the two periods, and $\Delta Q$ is the top-of-atmosphere radiative imbalance, which is the rate of heat uptake of the climate system. The oceans account for over 90% of the climate system heat uptake. The two periods used for the analysis were 1859-1882 and 1995-2011. They were chosen to give the longest early and late periods free of significant volcanic activity, which provide the largest change in forcing and hence the narrowest uncertainty ranges. The long interval between these periods averages out the effect of short-term ocean oscillations such as the Atlantic Multi-decadal Oscillation (AMO) and the Pacific Decadal Oscillation (PDO), but it does not account for millennium-scale ocean oscillations or indirect solar influences.

The 5th assessment report (AR5) of the Intergovernmental Panel on Climate Change (IPCC) gave a best estimate of aerosol forcing of -0.9 W/m2 (for 2011 vs 1750) with a 5 to 95% uncertainty range of -1.9 to -0.1 W/m2 [WG1 AR5 page 571]. Aerosols have a direct effect and an indirect effect from aerosol-cloud interactions, both of which are estimated to cause cooling. Aerosols are the dominant contribution to uncertainty in climate sensitivity estimates. AR5 substantially reduced this uncertainty compared to AR4, but the reduced uncertainty was not available soon enough to be incorporated into the climate models used in AR5. Consequently, those models used large aerosol cooling to offset greenhouse gas warming in the historical period, and assume aerosol cooling will decline in the future. This allows climate models to have high sensitivity to greenhouse gases while still roughly matching the historical temperature record. Aerosol forcing depends strongly on very uncertain estimates of the level of preindustrial aerosols.
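As a numerical check, the two energy-balance formulas can be evaluated directly. A minimal Python sketch using the values quoted later in this section for the 1859-1882 vs 1995-2011 comparison (ΔT = 0.72 °C, ΔF = 2.21 W/m2, ΔQ = 0.365 W/m2):

```python
import math

F_2XCO2 = 3.71  # forcing from a doubling of CO2, W/m^2

def tcr(dT, dF):
    """Transient climate response: warming at the time of CO2 doubling."""
    return F_2XCO2 * dT / dF

def ecs(dT, dF, dQ):
    """Equilibrium climate sensitivity: the heat uptake dQ is subtracted
    from the forcing change, since that energy has not yet warmed the surface."""
    return F_2XCO2 * dT / (dF - dQ)

dT, dF, dQ = 0.72, 2.21, 0.365
print(round(tcr(dT, dF), 2))      # ~1.21 C, the TCR best estimate
print(round(ecs(dT, dF, dQ), 2))  # ~1.45 C, the ECS best estimate

# Doubling time at the current CO2 growth rate of 0.55%/year
print(round(math.log(2) / math.log(1.0055)))  # ~126 years
```

Because $\Delta F > \Delta F - \Delta Q$, the TCR is always the smaller of the two numbers for the same data.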
Nicholas Lewis writes, “In this context, what is IMO a compelling new paper [3] by Bjorn Stevens estimating aerosol forcing using multiple physically-based, observationally-constrained approaches is a game changer.” Stevens is an expert on cloud-aerosol processes. He derived a new lower bound for aerosol forcing of -1.0 W/m2. The new aerosol forcing best estimate from 1750 is -0.5 W/m2 with a 5 to 95% uncertainty range of -1.0 to -0.3 W/m2. Lewis used this estimate for aerosol forcing and used the estimates of other forcings given in AR5 here [4]. Ocean heat content is from Box 3.1, Figure 1 of AR5.

The likely 83% upper bound of ECS was reported by the IPCC in AR5 at 4.5 °C, but this drops to 2.45 °C when calculated with the AR5 reported forcings, and to only 1.8 °C when substituting the Stevens estimate of aerosol forcing. The IPCC did not provide a 95% upper estimate of ECS, but estimates the 90% upper limit at 6 °C. The upper 95% limit drops dramatically from 4.05 °C using the AR5 forcings to only 2.2 °C when using the new Stevens aerosol forcing estimate. In terms of TCR, using the Stevens aerosol forcing reduces the upper 95% limit from 2.5 °C to 1.65 °C. According to HadCRUT4.4, the temperature change between the two periods (1859-1882 and 1995-2011) was 0.72 °C. Using the equations for TCR and ECS, the total forcing change $\Delta F$ during the interval was 2.21 W/m2 and the heat uptake $\Delta Q$ was 0.365 W/m2.

Adjustment for Millennium Cyclic Warming

This analysis by Lewis does not account for the long-term natural warming from the Little Ice Age (LIA), likely driven by indirect solar activity. The temperature history shows an obvious millennium-scale temperature oscillation, indicating that natural climate change accounts for a significant portion of the temperature recovery since the LIA. Climatologist Dr.
Richard Lindzen writes, “Lewis does not take account of natural variability, and, I suspect, his estimates are high.” Fredrik Ljungqvist prepared a temperature reconstruction of the extra-tropical Northern Hemisphere (ETNH) during the last two millennia with decadal resolution [Ljungqvist 2010] here [5]. The results are shown in Figure 1. Human-caused greenhouse gas emissions did not cause significant temperature change up to the year 1900 because cumulative CO2 emissions to 1900 were insignificant. The approximate temperature trends during each of the periods identified in Figure 1 were estimated. The average of the absolute natural temperature change over the four periods was 0.095 °C/century, as shown in Table 1 and Figure 2.

Figure 1. Extra-tropical Northern Hemisphere temperatures utilizing many palaeo-temperature proxy records, adapted from Ljungqvist 2010. The shading represents 2 standard deviation errors. RWP = Roman Warm Period AD 1-300; DACP = Dark Age Cold Period 300-900; MWP = Medieval Warm Period 800-1300; LIA = Little Ice Age 1300-1900; CWP = Current Warm Period 1900-now.

Table 1 – Temperature Change of the ETNH Over Two Millennia

| Transition | Begin °C | End °C | Change °C | Period (centuries) | Absolute °C/century |
|---|---|---|---|---|---|
| RWP-DACP | 0.06 | -0.37 | -0.43 | 4.0 | 0.108 |
| DACP-MWP | -0.37 | 0.01 | 0.38 | 5.2 | 0.073 |
| MWP-LIA | 0.01 | -0.55 | -0.56 | 6.1 | 0.092 |
| LIA-1900 | -0.55 | -0.25 | 0.30 | 2.8 | 0.107 |
| Average | | | | | 0.095 |

The warming of the ETNH from 1859 to date attributable to natural climate change is taken to be the average of the absolute temperature change rates during the periods identified in Table 1.

Figure 2.
Extra-tropical Northern Hemisphere temperature change with a 6th order polynomial fit and line segments.

The Ljungqvist 2010 paper gives several reasons why the reconstruction likely “seriously underestimates” the temperature variability, but does not make any corrections to the reconstruction. The author assumed a linear relationship between the temperature and the proxy, but the proxy response is often non-linear. The tree-ring proxies are biased toward the summer growing season: if the Little Ice Age cooling was more pronounced during winter months, the annual estimate would be biased too warm. The large dating uncertainties of the sediment proxies have the effect of “flattening out” the temperatures, so the true magnitudes of the warm and cold periods are underestimated. The proxy temperature did not rise as sharply during the 20th century as the thermometer record did, indicating that the instrument temperature record is biased high by the uncorrected urban heat island effect (UHIE) and/or that the reconstructed temperature variations from the proxies are underestimated. The Ljungqvist reconstruction is adjusted here to account for the summer tree-ring bias and the “flattening out” effect of the sediment proxies.

Adjustment for Summer Tree-ring Bias

The growing season in the ETNH is assumed to be from May through September. The Global Historical Climate Network (GHCN) CAMS gridded 2m analysis shows that July temperatures are 29 °C warmer than January temperatures, averaged over 2005-2015. The annual temperatures were compared to the weighted average of the growing season months during two decades of the coldest part of the record, 1960 to 1979, and the warmest part of the record, 1995 to 2014, to determine the growing season bias. The weighting factors were taken from an analysis of tree growth in Oregon, USA (here, Figure 5). The tree growth rates relative to June are shown in Table 2.
Table 2 – Tree Growth Rate Factors by Month

| Month | May | June | July | August | September |
|---|---|---|---|---|---|
| Growth rate relative to June | 0.75 | 1 | 0.7 | 0.35 | 0.18 |

The actual annual and weighted monthly growing season temperature history over land in the ETNH is given in Figure 3.

Figure 3. Actual annual and weighted average May through September temperatures of the extra-tropical Northern Hemisphere (30 – 90°N). The annual temperatures are indicated by the right vertical axis and the May – September growing season temperatures are indicated by the left vertical axis.

The annual and tree-growth-weighted average growing season temperatures during two cold decades and two warm decades are given in Table 3.

Table 3 – Growing Season Bias of Tree-ring Proxies

| | Annual °C | Growing Season °C | Annual/Growing Season |
|---|---|---|---|
| 1960 – 1979 | 2.25 | 13.42 | |
| 1995 – 2014 | 3.40 | 14.35 | |
| Difference | 1.15 | 0.93 | 123% |

The annual temperatures show more change than the tree growing season temperatures, indicating that the tree-ring proxies underestimate the temperature variability. Assuming that the seasonal temperature variability over the last century is similar to that over the last two millennia, the table indicates that tree-ring proxy temperature variability should be increased by 23%. Eight of the 30 proxies have this tree-ring seasonal bias. In addition to the tree-ring proxies, Ljungqvist identified 11 non-tree-ring proxies that are also subject to the summer seasonal bias. These proxies likely also underestimate the annual temperature variations and the Little Ice Age cooling, but no adjustment for them is made.

Adjustment for Sediment Dating Uncertainty

The dating uncertainty of sediment proxies is typically +/- 160 years.
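The growing-season arithmetic in Tables 2 and 3 above can be sketched in Python. The weighted-mean helper shows how May-September temperatures are combined using the Table 2 growth weights; any monthly temperatures passed to it are the caller's own inputs:

```python
# Tree-growth weights relative to June (Table 2)
GROWTH_WEIGHTS = {"May": 0.75, "Jun": 1.0, "Jul": 0.7, "Aug": 0.35, "Sep": 0.18}

def growing_season_mean(monthly_temps):
    """Growth-weighted May-September mean of a dict month -> temperature (C)."""
    total_w = sum(GROWTH_WEIGHTS.values())
    return sum(GROWTH_WEIGHTS[m] * monthly_temps[m] for m in GROWTH_WEIGHTS) / total_w

# Bias factor from Table 3: annual change vs growing-season change
annual_change, season_change = 1.15, 0.93
print(round(annual_change / season_change, 3))  # ~1.237 from the rounded
# table values; the text carries 122.8% computed from unrounded data
```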
Ljungqvist writes, “The dating uncertainty of proxy records very likely results in “flattening out” the values from the same climate event over several hundred years … so they are unable to capture the true magnitude of the cold and warm periods in the reconstruction.” The actual decadal temperature variation is assumed to be some factor greater than the reconstruction variation. The smoothed sediment temperatures are assumed to be an average of all the actual temperatures from 50 years earlier to 50 years after the recorded time. A model of the reconstruction was created as a weighted average of the smoothed temperatures of the 12 sediment proxies and the actual temperatures of the 18 non-sediment proxies. A factor of 1.12 was chosen so that the difference between the model and the reconstruction, summed over 50 years centered on each of the MWP maximum and the LIA minimum, is minimized. The result is shown in Figure 4.

Figure 4. Estimated effect of sediment proxy smoothing due to dating uncertainty. The actual temperature variation was estimated at 1.12 times the Ljungqvist reconstruction variation about the mean temperature of the MWP and LIA extremes.

Total Proxy Adjustment

The weighted average proxy bias adjustment is shown in Table 5.

Table 5 – Proxy Bias Adjustments

| Type | Number of Proxies | Bias Adjustment Factor |
|---|---|---|
| Tree-ring season bias | 8 | 122.8% |
| Sediment smoothing bias | 12 | 112% |
| Other proxies | 10 | 100% |
| Total | 30 | 110.9% |

Adjustment for Global Temperatures

The southern hemisphere and tropics temperature variability is less than that of the northern extra-tropics due to the larger ocean area, so the global temperature variations over the last 2000 years would be less than those of the northern extra-tropics. Ideally we would use a temperature reconstruction for the entire globe, but there are too few proxies for the tropics and southern hemisphere.
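The sediment smoothing model and the Table 5 weighted total above can be sketched as follows. The centered moving average stands in for the ±50-year dating blur on decadal data; the series passed to it is whatever reconstruction one is testing:

```python
def centered_smooth(series, half_window=5):
    """Centered moving average over +/- half_window decadal steps,
    mimicking the ~+/-50-year sediment dating blur."""
    out = []
    for i in range(len(series)):
        lo = max(0, i - half_window)
        hi = min(len(series), i + half_window + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

# Proxy-count-weighted total bias factor (Table 5)
biases = [(8, 1.228), (12, 1.12), (10, 1.00)]  # (number of proxies, factor)
total = sum(n * f for n, f in biases) / sum(n for n, _ in biases)
print(round(total, 3))  # ~1.109, i.e. the 110.9% total adjustment
```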
Table 4 shows the temperatures for the ETNH and the globe for the coolest and warmest two-decade periods as recorded by HadCRUT4.4. The table indicates that global temperatures vary by only 80% as much as the ETNH. It is assumed that this held true during the last two millennia.

Table 4 – Global and Extra-tropical Northern Hemisphere Temperature Variations

| | Global °C | ETNH °C | Global/ETNH |
|---|---|---|---|
| 1900 – 1919 | -0.392 | -0.280 | |
| 1990 – 2009 | 0.396 | 0.667 | |
| Change | 0.761 | 0.948 | 80.3% |

Total Millennium Cycle Adjustment

The global natural recovery from the LIA from 1859 is determined by the average temperature change rate over the four segments of the last two millennia in the ETNH, adjusted for the tree-ring seasonal bias and the sediment smoothing bias, and converted to global values. The global natural recovery from the LIA is estimated at 0.084 °C/century, which is the product of the 0.095 °C/century rate for the ETNH, the proxy bias factor of 110.9% and the global adjustment of 80.3%. Note that this does not include the seasonal biases of the sediment proxies, so it might underestimate the actual natural warming. The temperature change over the 1.33 centuries between the midpoints of the two periods used in the climate sensitivity analysis is reduced from 0.72 °C by the 0.11 °C natural warming to 0.61 °C of anthropogenic warming. The best estimate of ECS is reduced to 1.22 °C [likely range 0.95 – 1.55 °C] and the best estimate of TCR is reduced to 1.02 °C [likely range 0.85 – 1.25 °C]. These estimates do not include an adjustment for the UHIE. These uncertainty ranges have the same spread as determined by Lewis and do not include additional uncertainty due to the millennium warming cycle.

Adjustment for the Urban Heat Island Effect

Numerous papers have shown that the UHIE contaminates the instrument temperature record. A study by McKitrick and Michaels 2007, summarized here [6], showed that almost half of the warming over land in instrument data sets is due to the UHIE.
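The millennium-cycle adjustment chain described above reproduces the quoted sensitivities to within rounding. A sketch, with segment data from Table 1 and the bias and global factors from Tables 5 and 4:

```python
F_2XCO2, dF, dQ = 3.71, 2.21, 0.365  # doubling forcing, forcing change, heat uptake

# (temperature change C, duration in centuries) for each Table 1 segment
segments = [(-0.43, 4.0), (0.38, 5.2), (-0.56, 6.1), (0.30, 2.8)]
etnh_rate = sum(abs(c) / p for c, p in segments) / len(segments)
print(round(etnh_rate, 3))  # ~0.095 C/century

global_rate = etnh_rate * 1.109 * 0.803  # proxy bias and global/ETNH factors
# ~0.084-0.085 C/century; the text quotes 0.084

natural = 0.11               # global_rate * 1.33 centuries, rounded as in the text
dT = 0.72 - natural          # anthropogenic share of the observed warming
print(round(F_2XCO2 * dT / (dF - dQ), 1))  # ECS ~1.2 C (text: 1.22)
print(round(F_2XCO2 * dT / dF, 2))         # TCR ~1.02 C
```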
A study by de Laat and Maurellis 2006 came to similar conclusions. Note that the IPCC dismissed the overwhelming evidence of UHIE contamination by falsely claiming that “the locations of greatest socioeconomic development are also those that have been most warmed by atmospheric circulation changes”. That is, our cities were built where there is the most natural warming, a nonsensical claim. Climate models do not show such a correlation, which refutes the claim. A study by Watts et al. presented at the AGU fall meeting 2015 showed that bad siting of temperature stations has resulted in NOAA overestimating US warming trends by 59% since 1979. The HadCRUT4 analysis does not have a specific correction for the UHIE. The GISS temperature dataset uses a UHIE adjustment routine that applies a temperature trend change in the wrong direction in 45% of the adjustments. Instead of eliminating or reducing the urbanization effects, these “wrong way” corrections make the urban warming trends steeper, as shown here [7]. The McKitrick and Michaels 2007 results show that the HadCRUT temperature index trend from 1979 to 2002 over land would decline from 0.27 °C/decade to 0.13 °C/decade. The UHIE over land is therefore about 0.14 °C/decade, or 0.042 °C/decade on a global basis, since 1979. We assume no UHIE before 1980 and that the UHIE warming trend continues to 2011. The UHIE correction over the period 1980 to 2008 is 0.10 °C. This reduces the temperature change due to greenhouse gas forcings to 0.51 °C between the two periods 1859-1882 and 1995-2011 used in the analysis. The best estimate of ECS becomes 1.02 °C and the best estimate of TCR 0.85 °C.

Summary of Climate Sensitivity Estimates

Table 6 summarizes the ECS and TCR best estimates, with likely and extremely likely confidence intervals, for five cases. All forcing-based estimates use initial and final periods of 1859-1882 and 1995-2011, respectively. Ranges are to the nearest 0.05 °C.
Table 6 – Estimates of Equilibrium Climate Sensitivity and Transient Climate Response with Uncertainty Ranges

| Case | ECS Best Estimate °C | ECS 17-83% Range °C | ECS 5-95% Range °C | TCR Best Estimate °C | TCR 17-83% Range °C | TCR 5-95% Range °C |
|---|---|---|---|---|---|---|
| IPCC AR5 | n/a | 1.5 – 4.5 | 1 – n/a | 1.8 | 1 – 2.5 | n/a – 3.0 |
| Using AR5 Forcings | 1.64 | 1.25 – 2.45 | 1.05 – 4.05 | 1.33 | 1.05 – 1.80 | 0.90 – 2.50 |
| As above but with Stevens’ Aerosol Forcing | 1.45 | 1.20 – 1.80 | 1.05 – 2.20 | 1.21 | 1.05 – 1.45 | 0.90 – 1.65 |
| As above but with Natural Millennium Warming | 1.22 | 0.95 – 1.55 | 0.80 – 1.95 | 1.02 | 0.85 – 1.25 | 0.70 – 1.45 |
| As above but with UHIE Correction | 1.02 | 0.75 – 1.35 | 0.60 – 1.75 | 0.85 | 0.70 – 1.10 | 0.55 – 1.30 |

The best estimate TCR of 0.85 °C implies that the global temperature will increase from 2016 to 2100 due to anthropogenic CO2 emissions by only 0.57 °C if atmospheric CO2 continues to increase at the current rate of 0.55%/year. Actual temperatures may rise or fall depending on natural climate change.

Note the discrepancy between the ECS upper 83% limit of 4.5 °C as reported in AR5 and the calculated upper limit of 2.45 °C using the AR5 reported forcings and heat uptake estimates from empirical measurements. The AR5 reported value is based on “expert judgment” using clues from the paleoclimate record, climate model outputs and observation-based studies. Climate sensitivity estimates based on the paleoclimate record assume the only natural forcing is from tiny changes in the total solar irradiance; however, it is extremely likely that indirect solar effects due to changes in solar ultraviolet intensity and the solar magnetic field have a much greater effect on climate, so these estimates have no value. The climate model estimates only reflect the modelers’ biases and guesses of how aerosols, clouds and upper-atmosphere water vapor will change in response to warming.
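The final row of Table 6 and the 0.57 °C projection above follow from the same energy-balance formulas, with the 0.10 °C UHIE correction subtracted and the logarithmic forcing of CO2 applied:

```python
import math

F_2XCO2, dF, dQ = 3.71, 2.21, 0.365

dT = 0.72 - 0.11 - 0.10  # observed change minus natural warming and UHIE
print(round(F_2XCO2 * dT / (dF - dQ), 1))  # ECS ~1.0 C (text: 1.02)
tcr = F_2XCO2 * dT / dF
print(round(tcr, 2))                       # TCR ~0.86 C (text: 0.85)

# Warming 2016-2100 if CO2 keeps growing at 0.55%/year
doublings = (2100 - 2016) * math.log(1.0055) / math.log(2)
print(round(0.85 * doublings, 3))  # ~0.565 C, quoted as 0.57 C in the text
```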
Modelers observed that the amount of cloud generally declined with warming during the 20th century, and assumed that the cloud decline was caused by the warming, interpreting the change as a positive cloud feedback. But detailed analysis of clouds shows that the amount of cloud declined due to natural causes that allowed more sunlight in to warm the planet. Radiosonde and satellite measurements show that upper-atmosphere water vapour declines with warming, but climate models predict the opposite behaviour. The scientific method requires that when theory conflicts with empirical measurements, the theory should be modified to agree with the measurements. The IPCC falsely treats computer model output as evidence of climate sensitivity. Climate sensitivity estimates should be based only on observational studies during the instrument period, and climate models should be changed to agree with the lower observation-based climate sensitivity estimates.

Social Cost of Carbon

The US Government’s Interagency Working Group (IWG) on the Social Cost of Carbon (SCC) uses three integrated assessment models (IAMs) to determine the social costs and benefits of greenhouse gas emissions. Two of these models, DICE and PAGE, do not include the benefits of CO2 fertilization and other benefits of warming, and fail to account for adaptation. The FUND model does include these benefits, but arguably underestimates the benefits of CO2 fertilization. Idso (2013) found that the increase in the atmospheric concentration of carbon dioxide during the period 1961-2011 was responsible for increasing global agricultural output by $3.2 trillion (in constant 2005 US$). According to written testimony for the House of Representatives Committee on Natural Resources by Dr. Patrick Michaels here [8], the FUND model may have underestimated the CO2 fertilization effect by a factor of 2 to 3.
The IWG acknowledges that the three IAMs treat CO2 fertilization very differently, but claims the IAMs were selected “to reflect a reasonable range of modeling choices and approaches that collectively reflect the current literature on the estimation of damages from CO2 emissions.” Michaels wrote, “This logic is blatantly flawed. Two of the IAMs do not reflect the “current literature” on a key aspect relating to the direct impact of CO2 emissions on agricultural output, and the third only partially so. CO2 fertilization is a known physical effect from increased carbon dioxide concentrations. By including the results of IAMs that do not include known processes that have a significant impact on the end product must disqualify them from contributing to the final result. …. Results should only be included when they attempt to represent known processes, not when they leave those processes out entirely.”

The DICE model assumes that the optimum climate for humanity was the climate of 1900, which was near the end of the Little Ice Age, and that any temperature increase since then is harmful. Testimony by Dr. Mendelsohn to the Minnesota Public Utilities Commission here [9] argues that there is no evidence that the temperature increase since 1900 caused any damages, and that such damages would be easily detectable. He suggests that net damages would not occur until temperatures are 1.5 to 2.0 °C above pre-industrial levels, equal to 0.7 to 1.2 °C above current temperatures. The DICE model does not include benefits of warming.

Heat- and cold-related mortality is a major component of the SCC. An international study analyzing over 74 million deaths across 13 countries found that cold weather kills 20 times as many people as hot weather. Statistics Canada 2007-2011 data show that the Canadian death rate in January is 100 deaths/year greater than in August, as shown here [10]. Clearly the optimum temperature is greater than current temperatures.
The DICE model produces future sea level rise values that far exceed mainstream projections, exceeding even the highest end of the sea level rise projected for the year 2300 in the AR5 report. Dr. Robert Mendelsohn testified here [9] that the PAGE model is “not well grounded in economic theory” and uses an “uncalibrated probabilistic damage function”. Mendelsohn says here [11], “The version of the PAGE model used by the IWG explicitly does not include adaptation. Failing to include adaptation vastly overstates the damage that climate change will cause.” For these reasons this report uses only the FUND model to determine the SCC. The FUND model was developed by Dr. Richard Tol, Professor of Economics at the University of Sussex, UK. The FUND model shows that by 2100 Canada benefits from emissions by an amount equal to 1.9% of gross domestic product, equivalent to a benefit of $109 billion annually in 2015 dollars, assuming an ECS of 3 °C. Anthropogenic climate change will have only positive impacts in Canada, and these increase throughout the 21st century. [12]

Figure 5 shows the SCC (blue line) as a function of ECS. The ECS best estimate is indicated by the red square. The thick red line shows the 17-83% probability range, and the thin red line shows the 5-95% probability range of the ECS estimate. The FUND model values were provided by Dr. Richard Tol in testimony to the Minnesota Public Utilities Commission, Table 3, here [13]. The SCC values assume a real discount rate of 3%.
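A 3% real discount rate compresses far-future damages and benefits heavily, which is why the discount rate matters so much to any SCC figure. A generic illustration of the mechanics, not part of the FUND model itself:

```python
def present_value(amount, years, rate=0.03):
    """Present value of a cash flow 'years' ahead at a real discount rate."""
    return amount / (1 + rate) ** years

# $100 of damage or benefit occurring 100 years from now
print(round(present_value(100.0, 100), 2))  # ~$5.20 today at 3%
```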

Figure 5. The equilibrium climate sensitivity (ECS) as calculated by N. Lewis using aerosol forcing by Stevens, other forcings and heat uptake by IPCC AR5 and global surface temperatures adjusted to account for natural millennium cyclic warming and urban warming from 1980. The ECS best estimate is shown by the red square, uncertainty ranges by the red lines. Social cost of carbon as determined by the FUND integrated assessment model is shown by the blue line.

Projecting the ECS values vertically onto the blue SCC vs ECS curve gives the best estimate and confidence intervals of the SCC, as indicated in Figure 6.

The analysis shows that on a global basis the best estimate ECS of 1.02 °C gives an SCC of -17.7 US$/tCO2; that is, emissions are net beneficial. The likely range is -19.7 to -13.6 US$/tCO2, and the SCC is extremely likely to be less than -7.7 US$/tCO2. These results suggest that instead of imposing a carbon tax on fossil fuels, there should be a subsidy equal to about 18 US$/tCO2.
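Reading the SCC off the FUND curve amounts to interpolating Tol's tabulated (ECS, SCC) pairs. A sketch of the mechanics only; the support points below are hypothetical placeholders chosen to resemble the shape of Figure 5, not Tol's actual Table 3 values:

```python
# Hypothetical (ECS C, SCC US$/tCO2) support points -- placeholders only
CURVE = [(0.5, -21.0), (1.0, -17.9), (1.5, -12.5), (2.0, -6.0), (3.0, 6.0)]

def scc_at(ecs):
    """Piecewise-linear interpolation of the SCC at a given ECS."""
    for (x0, y0), (x1, y1) in zip(CURVE, CURVE[1:]):
        if x0 <= ecs <= x1:
            return y0 + (y1 - y0) * (ecs - x0) / (x1 - x0)
    raise ValueError("ECS outside tabulated range")

print(round(scc_at(1.02), 1))  # interpolated SCC at the best-estimate ECS
```

The same lookup applied at the ends of the ECS confidence intervals yields the SCC ranges shown in Figure 6.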

The benefits of CO2 fertilization, reduced cold-weather mortality, lower costs for outdoor industries such as construction, increased arable land area and reduced heating costs greatly exceed the harmful effects of warming on a global basis.

Figure 6. Social Cost of Carbon in US$/tCO2 indicating the best estimate, likely (17-83%), and extremely likely (5-95%) uncertainty ranges. The uncertainty ranges do not include uncertainty associated with the millennium warming cycle or the urban warming effect.

The social cost of carbon as determined by IAMs requires numerous assumptions that are not based on science or economics. It depends on assumptions about the choices that future consumers, voters and politicians will make many decades and centuries into the future. The SCC in part assumes governments fail to take appropriate action to mitigate flooding due to sea level rise. Dr. Tol explains, “the causal chain from carbon dioxide emission to social cost of carbon is long, complex and contingent on human decisions that are at least partly unrelated to climate policy. The social cost of carbon is, at least in part, also the social cost of underinvestment in infectious disease, the social cost of institutional failure in coastal countries, and so on.”

In Conclusion

The climate is much less sensitive to greenhouse gas emissions than is commonly believed. The expected climate change at the time of doubling of the CO2 concentration is about 0.85 °C, which at current CO2 growth rates would take about 125 years. The social cost of carbon is likely in the range of -20 to -14 US$/tCO2, with a best estimate of -18 US$/tCO2. The benefits of CO2 fertilization and warming are much greater than the harmful effects of warming. Emissions are very beneficial.

References

1. The Implications for Climate Sensitivity of Bjorn Stevens’ New Aerosol Forcing Paper, by Nicholas Lewis, Climate Audit, March 2015. http://climateaudit.org/2015/03/19/the-implications-for-climate-sensitivity-of-bjorn-stevens-new-aerosol-forcing-paper/
2. The implications for climate sensitivity of AR5 forcing and heat uptake estimates, by Nicholas Lewis and Judith Curry, Climate Dynamics, September 2014. https://niclewis.wordpress.com/the-implications-for-climate-sensitivity-of-ar5-forcing-and-heat-uptake-estimates/
3. Rethinking the Lower Bound on Aerosol Radiative Forcing, by Bjorn Stevens, Journal of Climate, June 2015. http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-14-00656.1
4. IPCC, WG1, Annex II: Climate System Scenario Tables, 2013. http://www.ipcc.ch/pdf/assessment-report/ar5/wg1/WG1AR5_AnnexII_FINAL.pdf
5. A New Reconstruction of Temperature Variability in the Extra-tropical Northern Hemisphere During the Last Two Millennia, by Fredrik Ljungqvist, Geografiska Annaler Series, 2010. http://agbjarn.blog.is/users/fa/agbjarn/files/ljungquist-temp-reconstruction-2000-years.pdf
6. Background Discussion On: Quantifying the Influence of Anthropogenic Surface Processes and Inhomogeneities on Gridded Global Climate Data, by Ross McKitrick and Patrick Michaels, Journal of Geophysical Research-atmospheres, December 2007. http://www.rossmckitrick.com/uploads/4/8/0/8/4808045/m_m.jgr07-background.pdf
7. Correct the Corrections: The GISS Urban Adjustment, by Ken Gregory, Friends of Science, June 2008. http://www.friendsofscience.org/index.php?id=396
8. An Analysis of the Obama Administration’s Social Cost of Carbon, by Dr. Patrick Michaels, U.S. House of Representatives Committee on Natural Resources, July 2015. http://www.friendsofscience.org/index.php?id=2153
9. Rebuttal Testimony on Socioeconomic Costs, Minnesota Public Utilities Commission, by Robert Mendelsohn, August 2015. http://www.friendsofscience.org/assets/files/Mendelsohn_Rebuttal_20158-113190-05.pdf
10. Winters not Summers Increase Mortality and Stress the Economy, By Joseph D’Aleo and Allan MacRae, May 2015. http://www.friendsofscience.org/index.php?id=2132
11. Sur-Rebuttal Testimony on Socioeconomic Costs, Minnesota Public Utilities Commission, by Robert Mendelsohn, September 2015. http://www.friendsofscience.org/assets/files/Mendelsohn_Sur-Rebuttal_20159-113912-05.pdf
12. Climate Economics, Edward Elgar Publishing Limited, by Richard Tol, 2014.
13. Rebuttal Testimony on Socioeconomic Costs, Minnesota Public Utilities Commission, by Richard Tol, August 2015. https://www.edockets.state.mn.us/EFiling/edockets/searchDocuments.do?method=showPoup&documentId=%7bA39ADC16-205E-44D3-B080-BC19501F3247%7d&documentTitle=20158-113190-07

The data and calculations are at:

This article, with a section on Alberta’s climate change plan, is available in PDF format at: http://www.friendsofscience.org/index.php?id=2205

http://www.friendsofscience.org/assets/documents/AB_carbon_tax_Economic_Impact_Gregory_Summary.pdf


### 34 Responses to The Economic Impact of Greenhouse Gas Emissions

1. Javier says:

Thanks for the interesting, provocative and well written article, Ken.

It goes a long way to answer the question I posed in the previous article in this blog:
http://clivebest.com/?p=7134#comment-9867
about the startling difference between proxy degrees and instrumental degrees.

I wonder however how much your sensitivity calculation depends on Ljungqvist reconstruction. Moberg 2005 reconstruction shows a lower peak two thousand years ago, and Mann’s reconstructions, well, don’t show much of a peak anywhere.

Other than that I have to say that I fully agree on the main points of your article:
– Global warming has been so far mostly beneficial while dangers remain hypothetical.
– Natural warming and especially solar variability are not understood nor accounted for.
– UHIE is not properly accounted for.
– CO2 climate sensitivity must be in the low range or even lower.

Asking for carbon subsidies in the current political climate made me laugh hard.

2. Clive Best says:

Very interesting article Ken! I was unaware of the Ljungqvist paper. There is in any case other evidence for the MWP and LIA, in both the advance and retreat of mountain glaciers and ice-rafted detritus in the North Atlantic (Bond et al 2001). In fact the MWP and LIA are interpreted as the last warm and cold periods of a Bond cycle. So there is little doubt that since ~1850 the Earth has been recovering from the LIA. The only question is what percentage of the observed 0.8C warming is due to that recovery. Of course the models ignore this completely.

The scientific method requires that when theory conflicts with empirical measurements, the theory should be modified to agree with the measurements.

The assumption so far seems to have been that the empirical measurements must be wrong if they don’t agree with theory – so they must need ‘correcting’. Aerosols are a good example.

If ECS really is < 1.5C then there is nothing to worry about and we can all go home. 😉

• Ken Gregory says:

We have climate policy to worry about. Apparently Ontario plans to ban the use of natural gas, and give huge subsidies to solar and wind projects and electric cars. The links at the end of the article give information on Alberta’s climate plans. Those policies are disastrous.

• Clive Best says:

Yes, climate policy based on replacing fossil fuels with current renewables at high latitudes will cause immense damage. PV solar energy is marginally worthwhile in desert regions but makes no sense at all at high latitudes like Canada’s. The EROI is about the same as the ancient Egyptians enjoyed, and that analysis (by Pedro Prieto) was done for Spain.

Wind energy has a better EROI but is hampered by reliability and low energy density. If you wanted to power Canada on batteries for 24 hours in winter without wind they would need to store an energy equivalent of 4 Mt of TNT – 4 large nuclear warheads.
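That 4 Mt figure can be roughly checked with a back-of-envelope calculation. This is only a sketch: the average power demand used below is an assumed round number, since the comment does not state its inputs.

```python
# Back-of-envelope check of the "4 Mt of TNT" figure for 24 hours of storage.
# Assumption (not stated in the comment): Canada's average total power
# demand is taken as roughly 190 GW. 1 megaton of TNT = 4.184e15 J.

avg_power_W = 190e9                         # assumed average demand, W
seconds_per_day = 24 * 3600
energy_J = avg_power_W * seconds_per_day    # energy to be stored for 24 hours

MEGATON_TNT_J = 4.184e15
megatons = energy_J / MEGATON_TNT_J         # ~3.9 Mt, close to the quoted 4 Mt
```

With a smaller assumed demand (e.g. electricity only, ~60 GW) the answer drops to roughly 1 Mt, so the quoted figure evidently refers to total energy demand rather than electricity alone.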

3. Ron Graf says:

Ken, great article. I was in a debate at “andthentheresphysics” (AKA Ken Rice) this week on this same issue, on his post trying to debunk a new paper by J. Ray Bates, discussed on Bishop Hill under the title How Low can ECS Go? Bates’s analysis leads to an EfCS (effective climate sensitivity) of ~1C.

Rice’s post did the energy balance plugging in 1C observed warming, a top-of-atmosphere (TOA) imbalance of 0.1 W/m2 in 1850 and 0.7 W/m2 in 2015, using 3.7 for the Planck feedback and 2.3 W/m2 for GHG forcing, to come up with 1.7C EfCS. Rice was illustrating that 1.0C is impossible even with no positive feedbacks, which the IPCC (and Rice) assume roughly double any warming.

I had the audacity to go into the lion’s den and tell his readers that there are two significant factors left out of Rice’s analysis, coincidentally the same ones you outlined above: centennial/millennial variability and UHI – micro-site contamination of the land record. (Ken Gregory did a much better job than I did.) To demonstrate that a rebound was due by natural cycle, I pointed to the OCEAN2K paper, McGregor (2015), showing that the period 1250-1700 CE, just before the pre-industrial age, was found to have a 0.7C drop in sea surface temperature (SST). McGregor delayed publication a couple of years and added co-authors to do a GCM analysis to find attribution. High volcanic activity was decided on as the primary fingerprint. They readily admitted that this did not preclude a contribution to the decline from solar inactivity and M-cycle decay. Although the graph was in SD units rather than deg C, they are about equivalent, as one can calculate from supplemental table S7.

I also argued the UHI point. By pointing to Kim (2002) and Hamdi (2011) I showed UHI is a real and quantified effect that mainly raises the daily minimum temperature (Tmin) on cool, clear nights. This can be seen by a trend of reducing diurnal temperature range (DTR) over the years in growing cities. Hamdi diagnosed a 2.5X divergence in trend from Brussels to its rural control.

Analysis of urban warming based on the remote sensing method reveals that the urban bias on minimum air temperature is rising at a higher rate, 2.5 times (2.85 ground-based observed) more, than on maximum temperature, with a linear trend of 0.15°C (0.19°C ground-based observed) and 0.06°C (0.06°C ground-based observed) per decade respectively…1955-2004.

When I confronted Steven Mosher with this he ignored it and claimed there was no clear way to diagnose UHI except using the Menne-type analysis of looking at neighboring cities and inferring an adjustment. The problem I see with that is when two or more neighboring sites are growing simultaneously and experiencing the same type of UHI trend.

DTR seems to be a promising avenue of investigation. To date all I can find is Vose (2005) as the most up-to-date study, using 70% of the land grid. Vose made little attempt at attribution besides mentioning cloudiness and land-use trends. Vose found a 0.035C/dec trend from 1950-2004, but all the rise was before 1979, and the lion’s share of that was 1950-1960. He also used Menne-type methods to “clean” his data, which may have erased UHI events.

I think DTR needs to be investigated as a possible diagnosis tool for UHI. Also, as Kim showed, rainy and windy days completely erase UHI, so meteorological records can also be used.

• Ken Gregory says:

The GISS UHIE adjustment compares trends of urban sites to rural sites. The problem with this, as you say, is that the rural sites generally have similar urban heat island trends to the urban sites. Most of the so-called rural sites are also contaminated by the effects of economic development. A study by Dr. Roy Spencer, presented in 3 blog posts and summarized by me at: http://www.friendsofscience.org/index.php?id=2224
shows that site locations with very low population densities of 5 or 10 persons/sq. km have significant heat island warming. Significantly, a doubling of population at low population densities produces greater warming than at higher population densities, as shown by the graph at that link. The GISS UHIE adjustment is totally useless.

A new paper by Dr. John Christy at http://journals.ametsoc.org/doi/10.1175/JAMC-D-15-0287.1
says “Summer TMax is a better proxy, when compared with daily minimum temperature and thus daily average temperature, for the deeper tropospheric temperature (where the enhanced greenhouse signal is maximized) as a result of afternoon convective mixing. Thus, TMax more closely represents a critical climate parameter: atmospheric heat content.” The UHIE affects TMin much more than TMax.

I agree with you that “DTR needs to be investigated as a possible diagnosis tool for UHI”.

• Steven Mosher says:

Ron tells some porkies.

I spent days explaining why he was wrong.

“I think DTR needs to be investigated as a possible diagnosis tool for UHI. Also, as Kim showed, rainy and windy days completely erase UHI. So using meteorological records can also be used.”

DTR cannot be used to identify UHI

1. UHI SOMETIMES will cause a NARROWING of DTR.. basically
TMIN is increased so TMAX-TMIN is NARROWED.

2. IF UHI, then sometimes DTR is narrowed.

Ron wants to affirm the consequent, and has argued that IF DTR Narrows, then
UHI

Problem:

From 1850 to 1900 DTR WIDENS about 0.5C, as I pointed out to him.
haha widening DTR must mean negative UHI in the record
Then it narrows for another 50 years,
Then it widens,
Then it is trendless for 20 or so years.

Simply: DTR is not a reliable indicator of UHI. Ron thought it was. He read Vose 2005.
I pointed him at the paper we just published with Thorne. Guess he didn’t read it.
Typical.

And for his wind and rain blather, he has yet to actually identify a test.

1. if we look at the windiest city in the US (all the UHI blown away), guess what?
It matches the global trend.
2. if we look at the rainiest cities in Europe (rain over 230 days a year), guess what?
They match the global trend.

4. Steven Mosher says:

“Numerous papers have shown that the UHIE contaminates the instrument temperature record. A study by McKitrick and Michaels 2007, summary here 6, showed that almost half of the warming over land in instrument data sets is due to the UHIE. A study by Laat and Maurellis 2006 came to identical conclusions.”

The only problem is there are deep data problems with McKitrick’s study, which I pointed out to him but he refused to fix.

Start here

https://judithcurry.com/2012/06/21/three-new-papers-on-interpreting-temperature-trends/#comment-211553

The mistake he made was rather gross and I was surprised that more people didn’t catch it.

He associates socioeconomic factors (more on that later) like population growth with UHI.
The mistake he made was in allocating population spatially: he did a uniform allocation.
So for example he took the entire US population and spread it UNIFORMLY according to 5 degree bins. This gives Alaska the same population density as Chicago.

Next, he failed to QC his allocation. So the island of St Helena gets the same population as England, and Antarctica got the population of France.

When I used actual population densities (HYDE, or GPW) his results fell to pieces.

Moreover, his regression did silly things like use literacy as a regressor.

• Ken Gregory says:

I sent Dr. Ross McKitrick an email. (It bounced from an address for Lise Tole.)
Subject: Population and GDP Density in your Paper
Hi Ross and Lise,

Climatologist Steve Mosher has criticized your paper titled “Evaluating Explanatory Models of the Spatial Pattern of Surface Climate Trends Using Model Selection and Bayesian Averaging Methods”. In a comment on my guest post on “The Economic Impact of Greenhouse Gas Emissions” on Clive Best’s blog here, he says here (May 23, 2016 at 2:35 am) “he took the entire US population and spread it UNIFORMLY according to 5 degree bins. This gives Alaska the same population density as Chicago.” In Canada, the grid cell at Colville Lake, NWT had the same population density as the grid cell at Montreal, Quebec, being 3.3 persons/km2. I am surprised that you did not use the “Gridded Population of the World”, see here. http://beta.sedac.ciesin.columbia.edu/data/set/gpw-v4-population-density

Population densities are shown in the map below. The population density varies greatly by location across North America. Also, your file “MMJGR07.csv” gives all areas within Canada the same GDP density, and all areas within USA the same GDP density. These values vary greatly among the Canadian provinces and territories and the USA states. Area and GDP by province, territory and state are easily obtained. Why did you not use more detailed population density and GDP density data in your analysis?

5. Steven Mosher says:

“DTR seems to be a promising avenue of investigation. To date all I can find is Vose(2005) as the most up to date study using 70% of land grid. Vose made little attempt at attribution besides mentioning cloudiness and land use trends. Vose found a 0.035C/dec trend from 1950-2004 but all the rise was before 1979, and the lion’s share of that was 1950-1960. He also used Menne-type methods to “clean” his data, which may have erased UHI events.”

Ron ignores the new paper I pointed him at

Why?

It blows his argument up

http://onlinelibrary.wiley.com/wol1/doi/10.1002/2015JD024583/full

here is the companion piece we worked on

http://onlinelibrary.wiley.com/wol1/doi/10.1002/2015JD024584/full

Bottom line: Ron thinks you can use DTR to diagnose UHI.
You can’t.
Why?
Because there are multiple causes for changes in DTR.

A good analogy:

Mann and others think you can use tree rings (which widen and narrow) to diagnose temperature.
But there are many things that cause rings to widen and narrow.

Same with DTR: many things cause it to narrow and widen.
Since 1850 it has widened, narrowed, widened again, and gone trendless recently.

Nice try guys.

First citing McKitrick, who has 50 million people living on St Helena, and now DTR dreams.

• Clive Best says:

There is a strong UHI effect, but there is a subtle reason why it has a small effect on temperature anomalies. Large cities are 2-3C warmer than the surrounding countryside. Similarly, a plateau is 2-3C cooler than the valley 500m below it. When you calculate temperature anomalies you normalise out this offset before you make a regional average. Therefore you are calculating the average DT across all cities, valleys and plateaux in a region.

Where UHI is important is when a city develops very fast over the time period being studied. Then DT can be dominated by the increase in urban heat. The analogous situation would be slowly moving a weather station down a slope by 500m. Does this happen in practice? Yes it does. A good example is São Paulo, which grew from a small town to one of the world’s most populated cities in just 80 years.

The red curve shows auto-adjustments made by Stephen Mosher? (NCDC). The green curve shows CRUTEM4. At the same time, I suspect the São Paulo trend will tend to correct surrounding stations downwards in the past. There are many other similar examples – Beijing, Tokyo, Moscow etc.
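The offset-versus-trend distinction above can be illustrated with a toy calculation (synthetic numbers, not real station data): a constant UHI offset cancels when anomalies are taken against a station baseline, but a city that grows during the record injects a spurious trend.

```python
# Toy illustration: a constant UHI offset cancels in anomalies,
# but UHI that grows over the record does not.
years = list(range(1961, 2021))

def series(base, trend):
    """Temperature series with a linear trend (deg C per year)."""
    return [base + trend * (y - years[0]) for y in years]

rural        = series(10.0, 0.01)          # background warming only
big_city     = series(13.0, 0.01)          # +3 C constant UHI offset
growing_city = series(10.0, 0.01 + 0.03)   # UHI grows through the record

def anomaly(temps, baseline_n=30):
    """Anomalies relative to the mean of the first baseline_n years."""
    base = sum(temps[:baseline_n]) / baseline_n
    return [t - base for t in temps]

# Constant offset: the city's anomalies match the rural record exactly.
assert max(abs(a - b) for a, b in
           zip(anomaly(rural), anomaly(big_city))) < 1e-9
# Growing city: its anomaly ends up well above the rural one.
assert anomaly(growing_city)[-1] > anomaly(rural)[-1] + 1.0
```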

• Javier says:

Since 1950 the world’s population has increased from 2.5 billion people to 7.4 billion people, and at the same time it has increasingly become urbanized. This process has affected measurements from almost every thermometer, at different time periods and with different rates of warming and intensity. This contamination is real. The end result can be measured by anybody (see figure from Warren Meyer). However, due to its nature it is impossible to correct, measure, or adjust, as we have no way of knowing when it took place or by how much for each thermometer. Luckily, since 1979 we have satellites that are immune to the UHI effect, and the satellites show less warming, as the theory predicts.

• Ron Graf says:

Steven Mosher:

Bottom line Ron thinks you can use DTR to Diagnose UHI. you cant. why? because there are multiple cause for changes in DTR.

Steven, your statement above is only true if UHI only intermittently diminishes DTR. If the effect is consistent, then all that is needed is to experiment while controlling the other variables that can affect DTR. Kim (2002) and other studies have done the groundwork for how to set up experiments. Hamdi (2011) reported on DTR affected by UHI but did not separate out according to weather conditions, as I would suggest.

Since the effect could vary with temperature conditions, I would break out 4 bins of temperature ranges for Tmin. I would use meteorological data to separate windy or rainy days from clear and calm days, to establish a direct control for each station and the group mean trends.

Since GW should have a near-equal trend regardless of weather, the trend on windy or rainy days should be an accurate estimation of the land temperature trend.

Additionally, since UHI has a near-neutral effect on Tmax while GW should affect Tmax regardless of weather, I would compare the Tmax trends of the windy and rainy days vs. the calm and clear, to check for consistency and prove the control.

The Tmax trends of the sample data chosen to be investigated could be used as a comparator to the entire global land database, to see if the Tmax trend is consistent. This can validate the study.

I would use raw data adjusted only for highly likely errors. I would not even adjust for systematic bias unless it could affect TMax differently than TMin.
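The proposed test could be sketched roughly as follows. This is a hypothetical illustration with synthetic data: the trend magnitudes and the windy/rainy-versus-calm/clear split are invented purely to show the comparison, not taken from any station record.

```python
# Sketch of the proposed UHI test (hypothetical, synthetic data):
# compare Tmin trends on calm/clear nights (UHI-affected) vs windy/rainy
# nights (UHI erased). Background warming appears in both subsets; the
# difference in trends estimates the UHI contribution.

def linear_trend(years, values):
    """Least-squares slope (deg C per year)."""
    n = len(years)
    my = sum(years) / n
    mv = sum(values) / n
    num = sum((y - my) * (v - mv) for y, v in zip(years, values))
    den = sum((y - my) ** 2 for y in years)
    return num / den

# Synthetic illustration: 0.01 C/yr background warming everywhere,
# plus an extra 0.02 C/yr UHI signal only on calm/clear nights.
years = list(range(1950, 2005))
calm_clear  = [(0.01 + 0.02) * (y - 1950) for y in years]
windy_rainy = [0.01 * (y - 1950) for y in years]

uhi_estimate = (linear_trend(years, calm_clear)
                - linear_trend(years, windy_rainy))   # recovers 0.02 C/yr
```

Real data would of course be noisy, and (per Mosher's objection) other influences on DTR would have to be controlled for before attributing the difference to UHI.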

BTW, could Vose (2005)’s 0.06C dDTR from 1950-1960 have been caused by time-of-day observation changes from the old to the new protocol, or was that all in the 1960s?

“Ron ignores the new paper I pointed him at.”

You pointed at about a week’s worth of reading. Thanks for prioritizing that one for me.

I am interested in good science wherever that takes us. Thank you for giving me and others the opportunity to engage.

6. Something to ponder. If the ECS is < 1.2C, then that implies that non-Planck feedbacks are 0, or slightly negative. Given this, how can there be internally-driven Millennial Cyclic Warming?

• Javier says:

The Millennial Cycle is not internally-driven. It is caused by Solar variability.

Bond et al., 2001

• Okay, so it’s not internally-driven, it’s solar. Based on the figure in the post, it has an amplitude of around 0.4C. If the ECS is < 1.2C then non-Planck feedbacks are going to be slightly negative. Hence, if the Planck response is 3.2W/m^2/K, an externally-driven cycle with an amplitude of 0.4C would require a change in external forcing of at least 0.4 x 3.2 = 1.3W/m^2. This seems a good deal larger than the estimated changes in solar forcing.
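The arithmetic in that estimate, spelled out (values as given in the comment; with no net non-Planck feedback, the required forcing is just amplitude times the Planck response):

```python
# Minimum forcing needed to drive a 0.4 C cycle with zero net
# non-Planck feedback: forcing = amplitude * Planck response.
amplitude_C  = 0.4    # amplitude of the millennial cycle (from the post's figure)
planck_Wm2_K = 3.2    # Planck response, W/m^2 per K

required_forcing_Wm2 = amplitude_C * planck_Wm2_K   # = 1.28, i.e. ~1.3 W/m^2
```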

• clivebest says:

Solar variability during the Holocene is probably just under 1 Watt/m2. However the net variability in UV flux is 10 times higher which dramatically changes the amount of stratospheric ozone. Ozone is a greenhouse gas.

I also doubt whether you can equate an external forcing (solar) with an internal forcing (CO2). The first is like turning up the heating in your house, while the latter is like putting on another jumper.

• Clive,
None of that is a response to the basic point that the variation in solar forcing is probably a few tenths of a W/m^2 and somehow you’re suggesting that this can produce a change in global surface temperature of around 0.4C, while a change in GHG forcing of 3.7W/m^2 would only produce a change in global surface temperature of around 1C. So, you’re arguing that we’re much more sensitive to one change than to another. So, somehow the system knows what’s producing the warming?

Also, as far as UV goes, my understanding is that the total UV flux is maybe a few hundred mW/m^2 (so, from a forcing perspective, only tens of mW/m^2). Even if this is highly variable, it’s still only varying by 10s of mW/m^2 and most of the UV flux does not reach the surface.

• Clive Best says:

Of course there’s a difference.

Solar radiation warms any point on the surface only during the day, peaking at 12 noon. To average 1 W/m2 over 24 hours means a peak increase of ~4 W/m2.

GHG ‘forcing’ is more effective at night when convection is inhibited. It is convection/evaporation that mostly cools the surface during the day.

They are not equivalent.

• I didn’t say they were exactly equivalent. However a change in solar forcing of x W/m^2 and a change in GHG forcing of x W/m^2 still refer to the change in planetary energy imbalance prior to any response. If you think that somehow we’ll warm much more when the change in forcing is solar, compared to when it is due to GHGs it might be nice if you could illustrate how this could happen. In both cases we’re accruing – on average – the same amount of energy per square metre per second and so a very different response would be fascinating.

• Clive Best says:

An increase of 1 W/m2 in solar energy is a smaller increase in net entropy radiation, ~ΔQ/T where T ≈ 5000 K. An increase in GHG ‘forcing’ is a much larger increase in entropy, ~ΔQ/T where T = 288 K.

• Clive,
Are you sure? The change in entropy is

dS = dQ/T,

where T is the temperature of the system. If I add dQ of energy from the Sun, the T is still the T of our system, not the T of the Sun. If I add dQ of energy because of an increase in GHGs, the T is still the T of our system.

What you seem to be suggesting is that if we were in equilibrium (Ein = Eout) and solar forcing increased by 1W/m^2 (i.e., Ein = Eout + 1W/m^2) the response would be very different to if GHGs increased so as to produce a change in forcing of 1W/m^2. In both cases Ein = Eout + 1W/m^2. Why would we expect the system to warm very differently in the former case to the latter?

• Ron Graf says:

What role does entropy play? Christopher Essex (1984) reads above my pay grade, but the conclusion is that the net entropy results from 5500 K directional radiation converting to 255 K omnidirectional radiation.

• Clive Best says:

Pretty sure. Think about it. The earth is in energy balance over long time periods: Ein = Eout. Yet the blackbody temperature difference between incoming and outgoing radiation is nearly 5000 K. That entropy increase is what drives all weather and life on earth. Therefore increasing incoming solar energy decreases net entropy far more than decreasing Eout.

• Clive,
It’s getting late, so I’ll have to think about this a bit more. Here’s a post by Nick Stokes that appears to be suggesting that what’s really relevant is the difference between the surface temperature (288K) and the temperature at which we radiate to space (255K). Here is another one, which says

Comparing with and without GHG:

With GHG there is less entropy created by incoming SW at the warmer surface – 714 W/K/m2 vs 820.

With GHG the total export of entropy is higher – 1006 W/K/m2 vs 922.

The entropy gain from ground to shell of 220 W/K/m2 can be associated with heat engine effects in the atmosphere, or with life processes etc. The colder shell is necessary to export the difference.

Clearly the change in entropy of the universe has to depend on the temperature of the Sun, but the idea that we would warm much more if we received 1W/m^2 from the Sun than if we were receiving 1W/m^2 from enhanced GHGs just seems a little odd.

• Ron Graf says:

Trends in solar spectral irradiance variability in the visible and infrared
Harder (2009) covers the solar spectral variability and hypothesizes effects on climate, but nothing solid. They do mention ozone production being affected, but don’t bring up its being a GHG.

I am thinking anomalies in earth’s magnetic field could lead to the radiation belt weakening and leaving the upper atmosphere more solar storm vulnerable.

• Clive Best says:

The entropy of radiation emitted by the sun is $\frac{4U}{3T}$, where T is the effective ‘temperature’ of the photons. The SORCE paper finds “SSI values for wavelengths with a brightness temperature greater than 5770 K show a brightening with decreasing solar activity, whereas those with lower brightness temperatures show a dimming.” The net effect is a reduction in radiative entropy incident on the earth at least a factor of 3 larger than the change in TSI.
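The scale of the entropy difference under discussion can be checked with the quoted formula. A sketch only: the temperatures are nominal values (5778 K for sunlight, 255 K for the Earth's effective emission temperature), not taken from the SORCE paper.

```python
# Entropy flux carried by blackbody radiation, per the formula S = 4U/(3T).
def radiation_entropy_flux(u_Wm2, T_K):
    """Entropy flux (W/K/m^2) for energy flux u at photon temperature T."""
    return 4.0 * u_Wm2 / (3.0 * T_K)

s_solar = radiation_entropy_flux(1.0, 5778.0)        # 1 W/m^2 of sunlight
s_terrestrial = radiation_entropy_flux(1.0, 255.0)   # 1 W/m^2 of outgoing IR

# Same energy flux, but ~22.7x more entropy at 255 K than at 5778 K:
# this temperature ratio is what makes the incoming/outgoing entropy
# budget so asymmetric.
ratio = s_terrestrial / s_solar
```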

• Ron Graf says:

McGregor (2015) attributed a 0.7C drop in SST over the interval 1-1700 CE, all happening between 1250-1700. They said it was mostly due to higher-than-average volcanic activity. Do you agree? If not volcanic or solar, then what are the main drivers of millennial variability? Or do you dispute variability?

• I thought we’d established that your claim of a 0.7C drop in SST was incorrect?

Of course I don’t dispute variability. However, if you’re going to claim a much higher sensitivity to one type of forcing compared to another, you’re going to have to explain why that is the case. An ECS of ~ 1K implies non-Planck feedbacks are slightly negative.

• Ron Graf says:

“I thought we’d established that your claim of a 0.7C drop in SST was incorrect?”

I missed that one.

OCEAN2K (2015) is surely not the last word, but I absolutely maintain their results show a 0.7C drop between 1100-1700 CE. If you would like me to write to the authors, I would be happy to.

Increased volcanic activity and decreased solar activity, leading to documented glacial advance and increased northern-hemisphere albedo, is IMO what caused the LIA. It’s all reversible natural forcing. The cold ocean temps lagged this forcing, and thus GMST warming lagged the reversal of those natural dips in forcing into the 19th and 20th centuries.

What reversed the LIA? Was it solar strengthening, aerosol clearing, ocean currents, pre-industrial AGW? None preclude the others; it could be all.

physicist: heat doesn’t ”radiate out into space” – a] kinetic energy / long-wave infrared doesn’t radiate for more than a micron from one molecule to another, AND if there is lots of heat, as from fire, it radiates only a few feet. b] most of the heat is NEUTRALIZED / CANCELED in the first 6-8km from the ground, by the ”cold vacuum” that penetrates through the thinner atmosphere – by 10km altitude ALL heat is neutralized – only occasionally a big volcano or nuclear explosion can get higher, and it is easily canceled into that cold space -> therefore: CO2 doesn’t block any radiation of heat -> therefore: the established theology that you worship is WRONG AND BACK TO FRONT.

• Javier says:

…and Then There’s Physics, you are too focused on energy input and output, when the system clearly tells us that it is energy distribution that determines climate. At the Last Glacial Maximum and at the Holocene Climate Optimum the Earth received essentially the same energy, yet the climate was completely different. And if you think that you can explain that in terms of latitudinal and seasonal distribution of insolation, think again, because 105 kyr ago 65°N summer insolation was 30 kWh/m2 higher than 10 kyr ago, and yet 105 kyr ago the planet was in glacial conditions and 10 kyr ago in interglacial.

Solar variability, due to its uneven distribution over the spectrum, has a different effect on the stratosphere and coupled troposphere than on the surface, and could perfectly well cause changes in the distribution of energy over the latitudinal gradient capable of provoking the changes observed in the millennial climate cycle.

There is too much that we don’t know about the climate to display the certainty that you do about what solar forcing does or doesn’t do.

7. Ken Gregory says:

I posted an email to Ross McKitrick under Steve Mosher’s comment of May 23, 2016 at 2:35 am. McKitrick’s response in part was:
“The data set uses population and GDP growth rates over 1980-2000, not end-of-sample levels. The growth rates don’t vary as much within countries as do the levels.
Temperature data in GHCN (especially for Canada) is strongly skewed towards cities, so using national growth rates is not as bad an approximation as he makes it out to be. It is correct that for large landmass countries like Canada, the US and Russia there is a lot of population density variation within the borders, but what matters is within-country growth rate variation mainly across the major cities. It is true that my data set lacks this resolution, and while using the clustering adjustment for the regression errors takes it into account for the variance estimation, it still means a loss of resolution. As for whether it’s fatal in the context of a global data set, Steve threw a bunch of examples at me in the past, like Mongolia, and I re-ran results dropping them out of the data set, and the results didn’t change.

The BEST group tried to argue years ago that the urbanization issue doesn’t matter because they tested for it and didn’t find it. I showed in a paper in Climatic Change in 2013 ( http://link.springer.com/article/10.1007%2Fs10584-013-0793-5 ; http://www.rossmckitrick.com/uploads/4/8/0/8/4808045/encompassing.preprint.pdf ) that their test wouldn’t be able to find the effect whether it exists or not, because they don’t take into account the important distinction between growth rates and levels.”