## Energy balance and the oceans

Climate forcing relies on an imbalance between incoming solar energy and outgoing IR energy at the TOA (top of atmosphere). The CERES instruments monitor radiation fluxes at the TOA and have measured this imbalance over the last 11 years [1]. Clouds remain a major uncertainty, and large seasonal swings in the "imbalance" are clearly observed. However, when averaged over 12 months there remains a small imbalance of about 0.6 ± 0.4 W/m2 [2]. Figure 1 shows the CERES data for all skies from March 2000 to 2012, showing a net constant imbalance of about 0.5 W/m2.

Fig 1: CERES TOA energy imbalance for all skies. Data from http://ceres-tool.larc.nasa.gov/

This value agrees with the paper by Stephens et al. [2], which quotes a TOA imbalance of 0.6 ± 0.4 W/m2. Where is this missing energy going? Are the Earth's oceans absorbing the extra energy and, if so, how fast is ocean temperature rising?

Ocean heat content can be derived from the measured temperature profiles of the Earth's oceans [3]. The heat content is calculated using the specific heat for a given depth/volume of water and the temperature change. Figure 2 shows the resultant OHC anomaly (change) for the top 700 m of the global oceans.

Fig 2: Heat content anomaly 0-700m relative to 1980. The dashed blue curves show the uncertainty in the global value. The red and magenta curves show the Northern and Southern hemispheres respectively.

AGW theory predicts a TOA "forcing" due to the increase in CO2 from pre-industrial times given by F = 5.3 ln(C/C0) W/m2. A step increase from 280 to 400 ppm would therefore cause an instantaneous 1.9 W/m2 imbalance (forcing) at the TOA. The data instead show a smaller, constant imbalance and a modest heat uptake by the oceans. The big picture then is the following:

1. For at least the last 12-40 years there has been an imbalance between incoming and outgoing radiation of ~0.6 W/m2 at the TOA.
2. The top 700m of the world's oceans have apparently absorbed ~1.3×10**23 Joules of energy, based on the measured heat content.

The heat content of the oceans is the heat capacity of sea water times the change in temperature, integrated over the entire mass of the oceans. The total area covered by oceans on Earth is about 3.6×10**14 m2, and the heat capacity of water is 4×10**3 J kg-1 K-1. This increase in heat content therefore corresponds to a net temperature rise in the top 700m of about 0.15 C.
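As a quick check, the conversion from heat content to temperature can be written out explicitly (a minimal Python sketch using the round numbers above):

```python
# Convert the 0-700 m ocean heat content change into a mean temperature rise,
# using the round numbers quoted in the text.
OHC_CHANGE = 1.3e23   # J, increase in 0-700 m ocean heat content
AREA       = 3.6e14   # m^2, global ocean surface area
DEPTH      = 700.0    # m, depth of the layer considered
RHO        = 1.0e3    # kg/m^3, density of water (rounded)
CP         = 4.0e3    # J/(kg K), specific heat of water

mass    = RHO * AREA * DEPTH            # kg of water in the top 700 m
delta_t = OHC_CHANGE / (CP * mass)      # K, mean warming of the layer

print(f"Mean warming of top 700 m: {delta_t:.2f} K")
# ~0.13 K, consistent with the ~0.15 C quoted above
```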

Fig 3: Ocean temperatures derived from heat content. For comparison the ERSST surface measurements are shown.

Can the TOA imbalance integrated over the oceans explain this rise? Assuming a constant net TOA forcing of 0.6 W/m2 since 1960, the total energy imbalance since then is 0.6 × 3.6×10**14 × (number of seconds since 1960) ≈ 3.4×10**23 (± 2.3×10**23) joules. That is roughly 2.5 times greater than the total ocean heat content change (1.3×10**23 joules), but still just about compatible within errors.
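The same arithmetic in code form (a sketch assuming the 0.6 W/m2 imbalance applies uniformly over the ocean area from 1960 to 2012; slightly different rounding gives the 3.4×10**23 J quoted above):

```python
# Integrate a constant 0.6 W/m^2 imbalance over the ocean area since 1960
# and compare with the measured 0-700 m ocean heat content change.
IMBALANCE = 0.6                    # W/m^2, assumed constant net TOA imbalance
AREA      = 3.6e14                 # m^2, ocean surface area
SECONDS   = (2012 - 1960) * 365.25 * 24 * 3600

energy_in  = IMBALANCE * AREA * SECONDS    # J delivered to the oceans
ohc_change = 1.3e23                        # J, measured heat content rise

print(f"Integrated imbalance: {energy_in:.2e} J")              # ~3.5e23 J
print(f"Ratio to measured OHC: {energy_in / ohc_change:.1f}")  # ~2.7
```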

Land surfaces warm and cool diurnally and seasonally because they react quickly to any changes in climate forcing. If the Earth's surface were 100% land then radiative energy would always remain in balance through corresponding increases or decreases in surface temperature. The oceans damp out global temperature changes. What seems to have happened over the last 50 years is that the oceans have been slowly absorbing the extra CO2 forcing through moderate temperature rises. Other factors, including changes in cloud cover, may also play a role in damping the energy imbalance.

Fig 4: Seasonally averaged CO2 levels since 1960

Fig 4 shows the atmospheric CO2 data since 1960. The extra CO2 forcing over this timeframe is predicted to be ~1.2 W/m2 (5.3 ln(400/320)). Only about half that amount is now observed in the TOA energy balance, so the rest has apparently been absorbed by the oceans.

Because CO2 forcing increases only logarithmically with concentration, the incremental forcing over the next 50 years will be smaller even if emissions continue at the same rate. The oceans would then take around a further 50 years to reach equilibrium [4]. So if levels stabilize after ~2070 the total increase in sea surface temperatures should still be less than ~1 C.

[1] CERES - http://ceres.larc.nasa.gov/compare_products.php

[2] G. Stephens et al., "An update on Earth's energy balance in light of the latest global observations", Nature Geoscience, Vol. 5, October 2012.

[3] http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/index.html & Levitus et al., GRL, 2002.

## H2O decreasing while CO2 rises!

Dire predictions of global warming all rely on positive feedback from water vapor. The argument goes that as surface temperatures rise, more water will evaporate from the oceans, amplifying the warming because H2O is itself a strong GHG. Climate models all assume net amplification factors of between 1.5 and 6. But in the real world, has the water content of the atmosphere actually been increasing as predicted?

NASA has just released its latest NVAP-M survey of global water content, derived from satellite data and radiosondes over the period 1988 to 2009. This new dataset is explicitly intended for climate studies. So let's compare the actual NVAP-M atmospheric H2O levels with the CO2 levels measured at Mauna Loa. I have extracted all the daily NVAP-M measurements and then calculated the global average. Figure 1 shows the running 30-day average of all the daily data recorded between 1988 and 2009 inclusive. The 365-day (yearly) running average is also shown. Plotted on the right-hand scale, in red, are the Mauna Loa CO2 concentration data over the same period.
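The processing is straightforward in principle; here is a minimal sketch of the kind of area-weighted global average and running mean used (the grid layout and variable names are assumptions for illustration, not the actual NVAP-M file format):

```python
import numpy as np

def global_mean_tpw(tpw, lats):
    """Cosine-latitude weighted global mean of one daily lat x lon grid of
    total precipitable water, ignoring missing (NaN) cells."""
    weights = np.cos(np.deg2rad(lats))[:, None] * np.ones_like(tpw)
    valid = ~np.isnan(tpw)
    return np.nansum(tpw * weights) / np.sum(weights[valid])

def running_mean(series, window):
    """Simple centred running mean, e.g. window=30 or 365 days."""
    return np.convolve(series, np.ones(window) / window, mode="valid")

# Illustration with synthetic daily 1x1 degree fields:
lats = np.arange(-89.5, 90.0, 1.0)
daily = np.array([global_mean_tpw(np.random.uniform(5, 60, (180, 360)), lats)
                  for _ in range(400)])
smooth_30d = running_mean(daily, 30)
smooth_365d = running_mean(daily, 365)
```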

Fig1: Total precipitable H2O (running 30 day average) compared to Mauna Loa CO2 data in red. The central black curve is a running 365 day average.

There is indeed some correlation in the data from 1988 until 1998, but thereafter the two trends diverge dramatically. Total atmospheric water content actually falls despite the relentless slow rise in CO2. This fall in atmospheric H2O also coincides with the observed, and now widely accepted, stalling of global temperatures over the last 16 years. All climate models (that I am aware of) predict exactly the opposite, so something is clearly amiss with the theory. Is it not now time for "consensus" climate scientists to have a rethink?

more to follow…

1. My thanks to Ken Gregory for help with the data. The conversion from NetCDF was a bit of a nightmare!
2. NASA NVAP-M data is available here. Thanks to NASA Water Vapor Project-Measures (NVAP-M) team.

update 22/4:  There is no inherent reason why atmospheric water content should be correlated with CO2 content. However, models typically assume a constant relative humidity so that if surface temperatures rise (AGW) then so does the total H2O content in the atmosphere. This post highlights the lack of correlation between the measured CO2 levels and the measured H2O levels in the atmosphere.

The instrument accuracy of the NVAP-M data has been questioned in the comments. If it can really be shown that systematic errors still dominate the data, then no firm conclusions can be drawn as yet.


## Marcott – Proxy errors and conclusions

There has been a lot of discussion (see here) about the treatment of errors by Marcott et al. through their Monte Carlo analysis. Since I avoided any interpolation or Monte Carlo by using only the measured values for each proxy, the error analysis for my reconstruction is straightforward. I have calculated the standard deviations, the error on the normal and the anomaly error for each proxy between 5000 and 4500 years BP. The results are given here.

The typical statistical error on a single anomaly measurement in a 100 year bin is 0.6 C. The statistical error on the overall global mean can be derived from sigma(mean) = sigma/sqrt(n), where n is the number of proxies contributing to the bin. The numbers contributing to each bin are shown here.
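In code, the error propagation is simply (a sketch; the bin counts below are illustrative, the real counts per bin are linked above):

```python
import numpy as np

SIGMA_PROXY = 0.6   # deg C, typical anomaly error for one proxy in a 100-year bin

def bin_mean_error(n_proxies, sigma=SIGMA_PROXY):
    """Standard error on the global mean anomaly of a bin: sigma / sqrt(n)."""
    return sigma / np.sqrt(n_proxies)

for n in (10, 30, 60, 73):   # illustrative numbers of contributing proxies
    print(f"n = {n:2d}  ->  sigma_mean = {bin_mean_error(n):.3f} deg C")
```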

Figure 1 shows the data with a 90% confidence level shading based on a 2 sigma error. The smooth curve is an FFT filter smoothing. Figure 2 shows the same result overlaid with the published graph in Science.

Fig 1: Global averaged temperature anomalies of 73 proxies with 100 year binning and redated. Shown in Yellow is the 2 sigma 90% confidence level.

Fig 2: Comparison with the published Marcott et al. result based on Monte Carlo derived errors.

Conclusions

The statistical errors are about 50% larger than those derived by Marcott's Monte Carlo analysis. However, there is good agreement between the two approaches on the long term trend. The previous post demonstrated that the data are insensitive to any rapid temperature variations of duration less than ~400 years. For this reason alone there is also no evidence in the data for a 20th century uptick. A perfect renormalisation of anomalies from 5500-4500 YBP to 1961-1990 is also needed to splice on the instrument data.


## Detecting peaks in Marcott data

New: Simulation of a 20% uncertainty in proxy resolution time and peaks practically disappear.
Have there been previous warming periods over the last 10,000 years comparable to the current warming? If so, would the Marcott proxy data have been able to detect them? Tamino has a post where he attempts to prove that such peaks would indeed have been detected, and that therefore there is nothing comparable to the recent warming spike. To show this he artificially generated three 200-year long triangular peaks of amplitude 0.9 C at dates 7000BC, 3000BC and 1000BC. These were added to the underlying data and processed as before. The resultant signals would (he claims) have easily been detected.

Let's do exactly the same thing on the individual raw proxy data and then process them in the way I described previously. I simply increased all proxy temperatures within 100 years of a peak by DT = 0.009*DY, where DY = (100 - ABS(peak year - proxy year)). Each spike is then a rise of 0.9 deg C over a span of 100 years, followed by a return to zero over the next 100 years. The same three dates were used for the peaks as those used by Tamino. The results are shown below for anomaly data binned in 50 year time intervals.
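The spike injection itself is trivial; a minimal sketch (the proxy data structure is assumed, but the increment is exactly the DT = 0.009*DY formula above):

```python
PEAK_YEARS = (-7000, -3000, -1000)   # BC dates written as negative years
HALF_WIDTH = 100                      # years each side of a peak
RATE       = 0.009                    # deg C per year, giving +0.9 C at the peak

def add_spikes(year, temperature):
    """Return the proxy temperature with any triangular test peak added;
    `year` is the (published or re-dated) proxy measurement year."""
    for peak in PEAK_YEARS:
        dy = HALF_WIDTH - abs(peak - year)
        if dy > 0:                      # within 100 years of a peak
            temperature += RATE * dy    # linear rise and fall, max +0.9 C
    return temperature

print(add_spikes(-3040, 15.0))   # 40 years from the 3000 BC peak -> +0.54 C
```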

Fig1: Comparison of proxy anomalies (relative to -3550 to -2550BC) with and without added peaks (shown in red). The curves are smoothed 5 point averages. The blue arrows show potential real peaks in the data including the Medieval Warm Period.

The peaks are indeed visible, although smaller than those claimed by Tamino. In addition, I believe we have been too generous by displacing the proxy data linearly upwards, since the measurement standard deviation should properly be folded in. I estimate this would reduce the peaks by ~30%.

What I find very interesting is that there actually do appear to be smaller but similar peaks in the real data (blue arrows), one of which corresponds to the Medieval Warm Period!

New: Simulation of a 20% time resolution error in proxy measurement time.

Several people have commented that proxies measure an average temperature over an extended period rather than instantaneously, and there is also uncertainty in the exact timing of the measurements. I have looked into this effect by simply randomizing the proxy measurement times within a window of 20% of the proxy resolution. So a proxy with a resolution of 100 years is moved randomly to a new time within ±20 years of the recorded time. The data are otherwise treated exactly as before, but the new time is now used to calculate the peak increment. This simulates proxy time synchronization errors. The result is shown below.
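A sketch of the jitter, assuming each proxy record carries its own resolution in years (the jittered time is then fed into the spike function from the previous sketch):

```python
import random

def jittered_year(year, resolution, fraction=0.2):
    """Shift a proxy measurement year randomly within +/- fraction*resolution,
    e.g. a 100-year resolution proxy moves within +/- 20 years."""
    window = fraction * resolution
    return year + random.uniform(-window, window)

# Usage: the randomized time decides how much of the test peak is added,
# everything else is processed exactly as before.
# t_new = jittered_year(proxy_year, proxy_resolution)
# temperature = add_spikes(t_new, temperature)
```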

Fig 2: Result after adding peaks based on a randomized time within 0.2 × resolution. Effectively only the peak at 3000BC is still discernible.

Uncertainty in the time synchronisation across proxies effectively smears out the peaks. With a random time shift of 20% of the resolution the pulses are essentially washed out, and only the pulse at 3000BC remains as a candidate. Note that this does not simulate long proxy measurement (averaging) times.

The code used in this post can be downloaded here. There are 3 PERL scripts:
- convert.pl converts the Marcott spreadsheet into individual station (proxy) files
- marcott_gridder.pl bins the data both in time (50y) and space (5×5 deg)
- marcott_global_average_ts_ascii.pl reads the grid and calculates the global averages, as plotted above

Place the Marcott excel spreadsheet (provided with the Science Paper’s supplementary material) in the working directory. Convert it to .xls, remove the two spurious zero temperature entries in Proxy 62, and ensure the metadata sheet cells do not span 2 rows.
Type:
>mkdir station-peaks
>cd station-peaks
>perl ../convert.pl
>cd ..
>perl marcott_gridder.pl | perl marcott_global_average_ts_ascii.pl > peaks-anomalies

The last 2 scripts are modified versions of the Met Office analysis software for CRUTEM4 – British Crown Copyright (c) 2009, the Met Office.


## Map of Marcott Proxies

The map below shows the locations of the 73 proxies provided by Marcott et al. "Mouse over" gives the name of the station, and a click on the dot shows a graph of the measured temperatures and anomalies (relative to 4500-5500 ybp). You can zoom in by dragging a rectangle over an area.


## Evidence that Marcott’s up-tick is an artefact

I have calculated from scratch the global averaged temperature anomalies for the 73 proxies used by Marcott et al. in their recent Science paper. The method used is described in the previous post. It avoids any interpolation between measurements and is based on the same processing software that is used to derive HADCRUT4 anomalies from weather station data. Shown below is the result covering the last 1000 years, averaged in 50-year bins. I am using the published dates; re-dating is a separate issue discussed below.

Figure 1: Detail of the last 1000 years showing in black the global averaged Proxy data and in red the HADCRUT4 anomalies. The proxies have been normalised to 1961-1990. Shown in blue is the result for the Proxies after excluding TN05-17.

There is no evidence of a recent uptick in these data. Previously I had noticed that much of the apparent upturn in the last 100 year bin was due to a single proxy: TN05-17, situated in the Southern Ocean (Lat = -50, Lon = 6). The blue dashed curve shows the 50 year resolution anomaly result after excluding this single proxy.

Figure 2 shows the anomaly data using the modified carbon dating (re-dating). This has been identified by Steve McIntyre and others as the main cause of the up-tick. However, I think this is only part of the story.

Figure 2: Global temperature anomalies using the modified dates (Marine09 etc). Proxies are averaged in 50 year time intervals.

The new dating suppresses the anomalies from 1600-1800. There is a single high point for the period 1900-1950. The much larger spike evident in the paper around 1940 (see also here) is, in my opinion, mainly due to the interpolation to a fixed 20 year interval. This generates more points than there are measurements and is very sensitive to time-scale boundaries. It is simply wrong to interpolate the proxies onto a 20 year time-base, because most of the proxies have measurement resolutions of > 100 years. I believe you should only ever use measured values, not generated values.

There is no convincing evidence of a recent upswing in global temperatures in either graph, whether based on the published or on the modified dates. I therefore suspect that Marcott's result is most likely an artefact of their interpolation of the measurement data to a fixed 20 year timebase, which is then accentuated by the re-dating of the measurements.

updated 24/3 : include re-dating graph

PERL code used in the previous post and this one can be downloaded here

1. convert.pl converts the spreadsheet into individual station (proxy) files.
2. marcott_gridder.pl generates a 5×5 grid with 50/100 year binning.
3. marcott_global_average_ts_ascii.pl generates global, NH and SH averages.
4. Step 1: >perl convert.pl
5. Step 2: >perl marcott_gridder.pl | perl marcott_global_average_ts_ascii.pl > redate-results generates the results.
6. You can avoid the first convert.pl step by downloading the generated station files here. Stations-new contains the redated stations. Stations_files contains the published dates.

The last 2 scripts are modified versions of the Met Office analysis software  for CRUTEM4 - British Crown Copyright (c) 2009, the Met Office.


## Marcott Study

There has been much media interest in a recent paper by Marcott et al., "A Reconstruction of Regional and Global Temperature for the Past 11,300 Years", published in Science. It seemingly shows that, following a slow cooling over the last 5000 years, temperatures have recently risen rapidly to levels approaching the Holocene maximum. The supplementary material for the paper provides details of the 73 temperature proxies used by Marcott.
I decided to re-analyse these 73 temperature proxies from scratch to check the basic analysis method, because I was concerned about the 20 year interpolation and the derivation of anomalies. This turned out to be problematic, due to various Excel formatting problems and the need to modify pre-existing Hadley/CRU PERL software to deal with multi-annual resolution. The proxy data have been binned in 100 year slices from AD -13100 to AD 1990. Normals for each proxy were calculated using a baseline of 5800-6200 ybp. These were then used to calculate temperature "anomalies", defined as changes in temperature relative to this baseline. The geographic averages over all proxies in each bin were combined to determine the average global, NH and SH temperatures and anomalies. Finally, the anomalies were renormalised from the original baseline to the CRU baseline of 1961-1990 simply by shifting them all up by +0.3 C so as to align them with HADCRUT4; the value of 0.3 was derived from the graph. I am now confident that the analysis as published has been done correctly. The results are shown in Fig 1.
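The actual processing uses the modified Hadley/CRU PERL scripts, but the core steps can be sketched as follows (a simplified Python outline with assumed data structures, and ignoring the 5×5 degree gridding step):

```python
import numpy as np

BIN = 100                   # years per bin
BASELINE = (5800, 6200)     # ybp window defining each proxy's normal

def proxy_anomalies(years_bp, temps):
    """Bin one proxy's measured values (no interpolation) and subtract its
    own 5800-6200 ybp normal to give anomalies per 100-year bin."""
    bins = {}
    for y, t in zip(years_bp, temps):
        bins.setdefault(int(y // BIN), []).append(t)
    normal = np.mean([t for y, t in zip(years_bp, temps)
                      if BASELINE[0] <= y <= BASELINE[1]])
    return {b: np.mean(v) - normal for b, v in bins.items()}

def global_average(all_proxy_anoms):
    """Average the anomalies of every proxy contributing to each bin."""
    combined = {}
    for anoms in all_proxy_anoms:
        for b, a in anoms.items():
            combined.setdefault(b, []).append(a)
    return {b: np.mean(v) for b, v in sorted(combined.items())}
```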

Marcott's result (except for the final uptick) is thus validated by this independent analysis. The cold period corresponding to the Younger Dryas is apparent 12,000 years ago. Steve McIntyre has already shown that the large final upturn post-1950 in the proxy data is largely due to a shifting of the published dates. Since everyone already suspected that the real intention was to splice these results onto the instrument data, I simply adopted exactly the same software algorithm as is used to derive HADCRUT4 anomalies.

Figure 1: Proxy temperature anomalies compared to those of Marcott et al. (their Fig 1) and HADCRUT4. The proxies have been re-normalised by +0.3 C to "match" 1961-1990.

It should be remembered, however, that the absolute scale remains dependent on the assumption that the proxies line up with the instrument anomalies. A common argument for using temperature anomalies instead of absolute temperatures for the instrument data is the need to remove site-dependent seasonal effects rather than location dependence. For the proxy data, however, anomalies are "needed" precisely because of geographical biases. We can see this directly by calculating the area-averaged absolute temperatures of the proxies.

Figure 2: Area averaged temperatures for the 73 proxies. Temperatures actually increase while anomalies decrease in the Younger Dryas period, 10,000 to 12,000 years ago.

The older proxies tend to be located at low latitudes, giving the effect of temperatures rising earlier. The time coverage of the proxies can be seen by comparing the number of proxies contributing to each 100 year bin.

Fig 3: Number of proxy temperature measurements in each 100 year bin. The best coverage is between 1000 and 9000 YBP.

Now we look at the variation in trends between the Southern Hemisphere and the Northern Hemisphere over the last 12,000 years. Figure 4 shows the 1961-1990 anomalies for the global, SH and NH components. There are clear differences in the trends. The only evidence of an uptick in temperature during the last 100 years originates from the Southern Hemisphere.

Figure 4: Detailed differences between SH and NH proxy temperature anomalies.

Global temperatures are not directly measured. Instead it is simply assumed that climate change is a global phenomenon resulting in a single linear shift in temperatures at all latitudes. This linear shift can be measured at each location by normalising to some average temperature over a reference period. For the 73 proxies the reference period with the most coverage is about 6000 years ago, and this defines the zero line for their temperature anomalies. An assumption is then made that the more recent proxy anomalies, measured into the 20th century, simply line up with the instrument-derived anomalies. This results in a ~0.3 C upward shift of the proxy anomalies, in agreement with the Marcott publication in Science. Only under these assumptions can we say that the Holocene warmed to about 0.4 C above the late 20th century, followed by a gradual cooling to the early 20th century.

Updated 23/3: Proxy 62 contains 2 spurious zeros. That was the origin of the previous dip in anomalies 10,000 years ago – fixed now.


## Beware which way the wind blows.

Total UK electricity power demand figures are made available by the National Grid; a live control panel can be seen here. These figures are a real eye-opener and should be compulsory reading for lobbyists and politicians. They highlight how the rush for wind is becoming a dangerous experiment. DECC policy may enrich vested interests at our expense, but it is doomed to failure because it ignores basic electrical power engineering. Unfortunately I think Prof. David MacKay's excellent book "Sustainable Energy – Without the Hot Air" is partly to blame. His book is based on good physics, but it largely ignores the actual resources, costs and engineering that would be needed. He was honest enough to show that only country-sized deployments of renewables could possibly meet demand, but these would then disrupt farming and the countryside. The trouble is that the green lobby took him seriously, and he was made chief scientist at DECC. Plans continue for very large components of renewables by 2050. The basic problem for all renewables is simple, old-fashioned load balancing.

Fig 1: Actual statistics on UK power delivered to the Grid from December 2012 to March 2013 on an hourly basis. Power provided by coal fired stations is in blue, that from gas fired power stations is in purple, and the contribution from wind is in red. Note the drop off over Christmas and the fundamental uncertainty in power delivery by wind.

For the grid to work it must hold in reserve enough capacity to meet the maximum theoretical demand, which for the UK is currently ~70 GW. An all-renewable grid is completely impossible in a country like the UK, which lacks truly massive hydro resources. A decarbonised future in the UK without a large nuclear base load is nonsense. In the short term only fossil fuels can dispatch sufficient power to handle the random intermittency of wind power. Gas is the only resource which can be held in reserve to quickly meet peak demand. Wind is randomly intermittent, and if it were to supply more than ~10% of peak capacity it would threaten the grid with complete collapse as its output fluctuates so wildly.

Imagine that by 2050 wind is expanded ten-fold to, say, 40,000 turbines, giving a theoretical peak capacity of 80 GW. This would need an investment of about 100 billion pounds, plus extra subsidies on electricity costs, as we will see. To dispatch power during low wind conditions we deploy only gas, since in this scenario the green lobby have ruled out nuclear. We can see what that might look like by scaling up the wind figures above.
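A toy balancing calculation makes the problem concrete (a sketch with illustrative numbers; in practice `wind_gw` would be the recorded wind output series plotted in Fig 1):

```python
# Toy load balancing: scale the metered wind output ten-fold and see how much
# dispatchable (gas) power is needed, and how much wind is spilled.
SCALE = 10                 # ten-fold expansion of wind capacity
NUCLEAR_BASELOAD = 0.0     # GW; zero in this all-gas-backup scenario

def balance(demand_gw, wind_gw):
    """Return (gas_needed, wind_spilled) in GW for one time step."""
    available = SCALE * wind_gw + NUCLEAR_BASELOAD
    if available >= demand_gw:
        return 0.0, available - demand_gw     # surplus wind discarded
    return demand_gw - available, 0.0         # gas fills the gap

for demand, wind in [(55, 0.5), (50, 3.0), (40, 6.0)]:   # illustrative GW values
    gas, spilled = balance(demand, wind)
    print(f"demand {demand} GW, wind {SCALE * wind:.0f} GW "
          f"-> gas {gas:.0f} GW, spilled {spilled:.0f} GW")
```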

Figure 2: Effect of a 10 fold increase in wind power on the UK grid. The rapid, massive intermittency of wind would need to be offset by fast power delivery from gas generation. A fixed nuclear base load would only reduce the scale of the rapid response needed from gas. Shown in yellow is wind energy discarded but still paid for.

Fine, you say – all we need to do is develop energy storage for renewables to iron out the intermittency. Note that this energy storage problem has already existed for 50 years. The UK builds twice the number of power stations that it actually needs, simply to ensure that the lights never go out on the coldest day in winter. We would already be saving billions if there were a simple solution to energy storage, but unfortunately there isn't. The scale of the problem for large wind deployment is an order of magnitude worse.

Furthermore, running gas power stations in such a rapid stop-start mode is extremely wasteful of fuel: the efficiency falls fast. Costs rise rapidly and CO2 emissions also rise, in the same way that driving a car through stop-go traffic increases fuel consumption. A grid with 50% of the load met by wind and 50% met by gas would cost about 3 times the current price for electricity and may not actually save any CO2 emissions at all. An analysis of these effects by an electrical engineer, Leo Smith, can be found here.

Basic Deployment Costs from [1]:

• Gas: 6.2p per kWh
• Onshore wind: 12.5p per kWh
• Offshore wind: 37.6p per kWh
• Nuclear: 8p per kWh
• EXTRA fuel costs incurred by wind-gas co-operation: 6-9p per kWh

The current cost of power generation in the UK is roughly 7-8p per kWh. If we were to attempt to deploy a renewable grid backed up by gas, these costs would roughly quadruple, reaching ~28p/kWh.
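As a rough consistency check, a blended cost can be estimated from the figures above (a toy calculation; the 50/50 split and the mid-range balancing penalty are my assumptions):

```python
# Toy blended generation cost (p/kWh) for a 50% wind / 50% gas grid,
# using the deployment costs listed above.
GAS       = 6.2    # p/kWh
ONSHORE   = 12.5   # p/kWh
OFFSHORE  = 37.6   # p/kWh
BALANCING = 7.5    # p/kWh, mid-range of the 6-9p extra wind-gas fuel cost

def blended(wind_cost, wind_share=0.5):
    return wind_share * wind_cost + (1 - wind_share) * GAS + BALANCING

print(f"50% onshore wind:  {blended(ONSHORE):.1f} p/kWh")    # ~16.9
print(f"50% offshore wind: {blended(OFFSHORE):.1f} p/kWh")   # ~29.4
```

A mix weighted towards offshore wind brings the blended figure up towards the ~28p/kWh quoted above.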

To quote from a recent Danish newspaper article:

The very fact that the wind power system, that has been imposed so expensively upon the consumers, can not and does not achieve the simple objectives for which it was built, should be warning the energy establishment, at all levels, of the considerable gap between aspiration and reality.

Denmark needs a proper debate and a thorough re-appraisal of the technologies that need to be invented, developed and costed before forcing the country into a venture that shows a high risk of turning into an economic black hole.

The whole energy debate has become so politicized that it is almost impossible to hold a rational discussion. There are two arguments for de-carbonizing energy. The first relies on the existential threat of climate change. Independent of whether this is a real threat or not, it still makes no sense for a small country like the UK to act unilaterally, because alone it can have no effect whatsoever on the climate. So until there is some international agreement, the government should not burden our citizens and industries with very high energy costs, all to no avail. The second argument for de-carbonizing makes more sense: eventually fossil fuels will run out and the world will then have to rely on non-carbon sources, so an effort now is worthwhile in the long run. However, we likely have at least 100 years before this becomes an urgent issue. In the meantime there is a related argument that the UK should not have to rely on ever more expensive fossil fuels from unstable countries, and renewables help in this respect. Investment in renewables in this picture is better viewed as an insurance policy against "risks" from climate change and uncertain energy supplies. What insurance premium would you be willing to pay?

The premium is just too high for wind power – a highly unstable national grid costing 28p/kWh for wind/gas, to insure against a possibly beneficial 1-3 deg C temperature rise. It is far more sensible to opt for 80% nuclear at 9p/kWh and a stable grid – like the French!

[1] http://www.gridwatch.templar.co.uk/index.php


## Water Vapor Decline Cools the Earth: NASA Satellite Data

### Guest post by Ken Gregory P.Eng., Friends of Science.org

Original article at http://www.friendsofscience.org/index.php?id=483

An analysis of NASA satellite data shows that water vapor, the most important greenhouse gas, has declined in the upper atmosphere causing a cooling effect that is 16 times greater than the warming effect from man-made greenhouse gas emissions during the period 1990 to 2001.

The world has spent over $1 trillion on climate change mitigation based on climate models that don’t work. They are notoriously poor at simulating the 20th century warming because they do not include natural causes of climate change – mainly due to the changing sun – and they grossly exaggerate the feedback effects of greenhouse gas emissions.

Most scientists agree that doubling the amount of carbon dioxide (CO2) in the atmosphere, which takes about 150 years, would theoretically warm the earth by one degree Celsius if there were no change in evaporation, or in the amount or distribution of water vapor and clouds. Climate models amplify the initial CO2 effect by a factor of three by assuming positive feedbacks from water vapor and clouds, for which there is little direct evidence. Most of the amplification in the climate models is due to an increase in upper atmosphere water vapor.

The Satellite Data

The NASA water vapor project (NVAP) uses multiple satellite sensors to create a standard climate dataset to measure long-term variability of global water vapor. NASA recently released the Heritage NVAP data, which gives water vapor measurements from 1988 to 2001 on a 1 degree by 1 degree grid, in three vertical layers (Note 1). The NVAP-M project, which is not yet available, extends the analysis to 2009 and gives five vertical layers. The water vapor content of an atmospheric layer is expressed as the height in millimeters (mm) of liquid water that would result from precipitating all the water vapor in a vertical column. The near-surface layer is from the surface to where the atmospheric pressure is 700 millibar (mb), or about 3 km altitude. The middle layer is from 700 mb to 500 mb air pressure, or from 3 km to 6 km altitude. The upper layer is from 500 mb to 300 mb air pressure, or from 6 km to 10 km altitude.

The global annual average precipitable water vapor by atmospheric layer and by hemisphere from 1988 to 2001 is shown in Figure 1.

The graph is presented on a logarithmic scale so the vertical change of the curves approximately represents the forcing effect of the change. For a steady earth temperature, the amount of incoming solar energy absorbed by the climate system must be balanced by an equal amount of outgoing longwave radiation (OLR) at the top of the atmosphere. An increase of water vapor in the upper atmosphere would temporarily reduce the OLR, creating a forcing of more incoming than outgoing energy, which raises the temperature of the atmosphere until the balance is restored.

Figure 1. Precipitable water vapor by layer, global and by hemisphere.

The graph shows a significant percentage decline in upper and middle layer water vapor from 1995 to 2001. The near-surface layer shows a smaller percentage increase, but a larger absolute increase in water vapor than the other layers. The upper and middle layer water vapor decreases are greater in the Southern Hemisphere than in the Northern Hemisphere.

Table 1 below shows the precipitable water vapor for the three layers of the Heritage NVAP and the CO2 content for the years 1990 and 2001, and the change.

| Layer | L1 near-surface (1013-700 mb) | L2 middle (700-500 mb) | L3 upper (500-300 mb) | Sum | CO2 |
|---|---|---|---|---|---|
| Units | mm | mm | mm | mm | ppmv |
| 1990 | 18.99 | 4.60 | 1.49 | 25.08 | 354.16 |
| 2001 | 20.72 | 4.03 | 0.94 | 25.69 | 371.07 |
| Change | 1.73 | -0.57 | -0.55 | 0.61 | 16.91 |

Table 1. Heritage NVAP 1990 and 2001 water vapour and CO2.

Dr. Ferenc Miskolczi performed computations using the HARTCODE line-by-line radiative code to determine the sensitivity of OLR to a 0.3 mm change in precipitable water vapor in each of 5 layers of the NVAP-M project. The program uses thousands of measured absorption lines and is capable of doing accurate radiative flux calculations. Figure 2 shows the effect on OLR of a change of 0.3 mm in each layer.

The results show that a water vapor change in the 500-300 mb layer has 29 times the effect on OLR of the same change in the 1013-850 mb near-surface layer. A water vapor change in the 300-200 mb layer has 81 times the effect on OLR of the same change in the 1013-850 mb near-surface layer.

Figure 2. Sensitivity of 0.3 mm precipitable water vapor change on outgoing longwave radiation by atmospheric layer.

Table 2 below shows the change in OLR per change in water vapor in each layer, and the change in OLR from 1990 to 2001 due to the change in precipitable water vapor (PWV).

| | L1 | L2 | L3 | Sum | CO2 |
|---|---|---|---|---|---|
| OLR/PWV (W/m2 per mm) | -0.329 | -1.192 | -4.75 | | |
| OLR/CO2 (W/m2 per ppmv) | | | | | -0.0101 |
| OLR change (W/m2) | -0.569 | 0.679 | 2.613 | 2.723 | -0.171 |

Table 2. Change of OLR by layer from water vapor and from CO2 from 1990 to 2001.
The calculations show that the cooling effect of the water vapor changes on OLR is 16 times greater than the warming effect of CO2 during this 11-year period. The cooling effect of the two upper layers is 5.8 times greater than the warming effect of the lowest layer.
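Table 2 follows directly from Table 1 and the layer sensitivities; the arithmetic, in code form:

```python
# Reproduce Table 2: multiply each layer's 1990-2001 water vapour change
# (Table 1) by its OLR sensitivity, and compare with the CO2 term.
pwv_change  = {"L1": 1.73, "L2": -0.57, "L3": -0.55}      # mm, from Table 1
olr_per_mm  = {"L1": -0.329, "L2": -1.192, "L3": -4.75}   # W/m^2 per mm
co2_change  = 16.91                                        # ppmv, from Table 1
olr_per_ppm = -0.0101                                      # W/m^2 per ppmv

olr_h2o   = {k: olr_per_mm[k] * pwv_change[k] for k in pwv_change}
total_h2o = sum(olr_h2o.values())          # +2.72 W/m^2  (cooling effect)
olr_co2   = olr_per_ppm * co2_change       # -0.17 W/m^2  (warming effect)

print(olr_h2o)
print(f"H2O total: {total_h2o:.3f} W/m^2, CO2: {olr_co2:.3f} W/m^2, "
      f"ratio ~{abs(total_h2o / olr_co2):.0f}")            # ~16
```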

These results highlight the fact that changes in the total water vapor column, from the surface to the top of the atmosphere, are of little relevance to climate change, because the sensitivity of OLR to water vapor changes in the upper atmosphere overwhelms that in the lower atmosphere.

The precipitable water vapour by layer versus latitude by one degree bands for the year 1991 is shown in Figure 3. The North Pole is at the right side of the figure. The water vapor amount in the Arctic in the 500 to 300 mb layer goes to a minimum of 0.53 mm at 68.5 degrees North, then increases to 0.94 mm near the North Pole.

Figure 3. Precipitable water vapor by layer in 1991.

The NVAP-M project extends the analysis to 2009 and reprocesses the Heritage NVAP data. This layered data is not publicly available. The total precipitable water (TPW) data is shown in Figure 4, reproduced from the paper Vonder Haar et al (2012) here. There is no evidence of increasing water vapor to enhance the small warming effect from CO2.

Figure 4. Global month total precipitable water vapor NVAP-M.

Water vapor humidity data is measured by radiosonde (on weather balloons) and by satellites. The radiosonde humidity data is from the NOAA Earth System Research Laboratory here.

Figure 5. Global relative humidity, middle and upper atmosphere, from radiosonde data, NOAA Earth System Research Laboratory.

A graph of the global average annual relative humidity (RH) from 300 mb to 700 mb is shown in Figure 5. The specific humidity in g/kg of moist air at 400 mb (8 km) is shown in Figure 6. It shows that specific humidity has declined by 14% since 1948 using the best fit line.

Figure 6. Specific humidity at 400 mb pressure level

In contrast, climate models all show RH staying constant, implying that specific humidity is forecast to increase with warming. So climate models show positive feedback and rising specific humidity with warming in the upper troposphere, but the data show falling specific humidity and negative feedback.

Many climate scientists dismiss the radiosonde data because of changing instrumentation and because the declining humidity conflicts with the climate model simulations. However, the radiosonde instruments were calibrated and the data corrected for changes in response times. The data before 1960 should be regarded as unreliable due to poor global coverage and inferior instruments. The near-surface radiosonde measurements from 1960 to date show no change in relative humidity, which is consistent with theory. Both the satellite and radiosonde data show declining upper atmosphere humidity, so there is no reason to dismiss the radiosonde data. The radiosonde data only measure humidity over land stations, so it is interesting to compare them with the satellite measurements, which have global coverage.

Comparison Between Radiosonde and Satellite Data

The specific humidity radiosonde data was converted to precipitable water vapor for comparison with the satellite data. Figure 7 compares the satellite data to the radiosonde data for the years 1988 to 2001.

Figure 7. Comparison between NOAA radiosonde and NVAP satellite derived precipitable water vapor.

The NOAA and NVAP data compare very well for the period 1988 to 1995. The NVAP satellite data show less water vapor in the upper and middle layers than the NOAA data. In 2000 and 2001 the NVAP data show more water vapor in the near-surface layer than the NOAA data. The vertical change on the logarithmic graph is roughly proportional to the forcing effect of each layer, so the NVAP data imply that water vapor has had a greater cooling effect than the radiosonde data suggest.

The Tropical Hot Spot

The models predict a distinctive pattern of warming – a "hot-spot" of enhanced warming in the upper atmosphere at 8 km to 13 km over the tropics, shown as the large red area in Figure 8. The temperature at this "hot-spot" is projected to increase two to three times faster than at the surface. However, the Hadley Centre's real-world plot of radiosonde temperature observations from weather balloons, shown below, does not show the projected hot-spot at all. The predicted hot-spot is entirely absent from the observational record. If it were there it would have been easily detected.

The hot-spot is forecast by climate models because of the theory that the water vapor profile in the tropics is dominated by the moist adiabatic lapse rate, which requires that water vapor increases in the upper atmosphere with warming. The moist adiabatic lapse rate describes how the temperature of a parcel of water-saturated air changes as it moves up in the atmosphere by convection, such as within a thunder cloud. A graph here shows two lapse rate profiles with a larger temperature difference in the upper atmosphere than at the surface. The projected water vapor increase creates the hot-spot and is responsible for half to two-thirds of the surface warming in the IPCC climate models.

Figure 8. Climate models predict a hot spot of enhanced warming rate in the tropics, 8 km to 13 km altitude. Radiosonde data shows the hot spot does not exist. Red indicates the fastest warming rate. Source: http://joannenova.com.au

The projected upper atmosphere water vapor trends and temperature amplification at the hot-spot are intricately linked in the IPCC climate theory. The declining upper atmosphere humidity is consistent with the lack of a tropical hot spot, and both observations prove that the IPCC climate theory is wrong.

A recent technical paper, Po-Chedley and Fu (2012) here, compares the temperature trends of the lower and upper troposphere in the tropics from satellite data with the climate model projections for the period 1981 to 2008 (Note 2). The upper troposphere is the part of the atmosphere where the pressure ranges from 500 mb to 100 mb, or from about 6 km to 15 km. The paper reports that the warming trend during 1981 to 2008 in the upper troposphere simulated by climate models is 1.19 times the simulated warming trend of the lower troposphere in the tropics. (Note this comparison is to the lower troposphere, not the surface, and includes 10 years of no warming to 2008.) Using the most current version (5.5) of the satellite temperature data from the University of Alabama in Huntsville (UAH), the warming trend of the upper troposphere is only 0.973 times that of the lower troposphere in the tropics for the same period. This differs from the value reported in the paper because the authors used an obsolete version (5.4) of the data. The satellite data show not only a lack of a hot-spot; they show a cold-spot just where a hot-spot was predicted.

Conclusion

Climate models predict upper atmosphere moistening, which triples the greenhouse effect from man-made carbon dioxide emissions. The new satellite data from the NASA water vapor project show declining upper atmosphere water vapor during the period 1988 to 2001. They are the best available data for water vapor because of their global coverage. Calculations by a line-by-line radiative code show that upper atmosphere water vapor changes at 500 mb to 300 mb have 29 times the effect on OLR and temperature of the same change near the surface. The cooling effect of the water vapor changes on OLR is 16 times greater than the warming effect of CO2 during the 1990 to 2001 period. Radiosonde data show that upper atmosphere water vapor declines with warming. The IPCC dismisses the radiosonde data because the decline is inconsistent with theory. During the 1990 to 2001 period, upper atmosphere water vapor from satellite data declines more than that from radiosonde data, so there is no reason to dismiss the radiosonde data. Changes in water vapor are linked to temperature trends in the upper atmosphere. Both satellite and radiosonde data confirm the absence of any tropical upper atmosphere temperature amplification, contrary to IPCC theory. Four independent data sets demonstrate that the IPCC theory is wrong. CO2 does not cause significant global warming.

Note 1. The NVAP data in Excel format is here.

Note 2. The lower troposphere data is: http://www.nsstc.uah.edu/public/msu/t2lt/uahncdc.lt

The upper troposphere data is calculated as 1.1 x middle troposphere – 0.1 x lower stratosphere; where the middle troposphere is: http://www.nsstc.uah.edu/public/msu/t2/uahncdc.mt and the lower stratosphere is: http://www.nsstc.uah.edu/public/msu/t4/uahncdc.ls


This post examines how radiative forcing depends on the CO2 concentration in the atmosphere. In CO2 greenhouse demystified, we calculated the effective emission height at which "thermal" photons escape to space. This height depends on the lapse rate temperature and defines the outgoing radiative flux for a given wavelength. By integrating over all lines in the CO2 15 micron band we were able to derive by how much the radiative flux is reduced for a doubling of CO2. This "instantaneous" energy imbalance is usually called radiative forcing. Now we study in detail how this radiative forcing depends on CO2 concentration. Figure 1 shows how the CO2-induced radiative forcing varies as a function of fractional concentration (C/C0), assuming a constant surface temperature of 288 K and a constant lapse rate.

Figure 1: Radiative forcing versus the fractional increase in CO2 concentration (C/C0), where C0 = 100 ppm.

The increase shows an approximately logarithmic dependence. A best fit to the data for CO2 concentrations from 100 ppm (the reference) up to 1000 ppm (C/C0 = 10), assuming a fixed surface temperature and a stable atmosphere, gives

RF = 6.6 ln(C/C0) W/m2, where C is the CO2 concentration.

We have thus derived the formula often quoted in climate science for the radiative forcing due to a CO2 increase from concentration C0 to C. The canonical expression is ~5.4 ln(C/C0) [1]. Our result is about 20% higher – but that is pretty good IMHO, since the canonical formula is rarely explained without reference to results from various GCM black boxes. Here we see how it can be approximately derived just from the change in emission height with increasing CO2.
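For comparison, the two logarithmic expressions can be evaluated side by side (a sketch; 6.6 is the fit above, 5.35 the Myhre et al. [1] coefficient usually rounded to ~5.4):

```python
import numpy as np

C0 = 100.0   # ppm, reference concentration used for the fit

def rf_emission_height(c):      # fit to the emission-height calculation above
    return 6.6 * np.log(c / C0)

def rf_myhre(c):                # canonical expression, Myhre et al. (1998)
    return 5.35 * np.log(c / C0)

for c in (200, 280, 400, 600, 1000):
    print(f"{c:5.0f} ppm   this work {rf_emission_height(c):5.2f} W/m2"
          f"   Myhre {rf_myhre(c):5.2f} W/m2")
```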

Figure 2 shows the outgoing spectra for a range of concentrations, illustrating how the OLR is reduced with increasing CO2 to produce this dependence.

Fig 2: Change in outgoing IR spectra for a range of CO2 concentrations. Each successive spectrum has been offset by 5 mW/m2/sr/cm-1 to better visualise the differences.

Finally we can also make an estimate for the net change in surface temperature due to CO2.

$S = \epsilon\sigma T^4$
$\Delta S = 4\epsilon\sigma T^3 \Delta T$
$\Delta T = \frac{\Delta S}{4\epsilon\sigma T^3}$

Averaging over clouds ($\epsilon = 0.5$), oceans and land ($\epsilon = 0.95$), we then get a globally averaged $\epsilon = 0.65$.

If we assume that each increment in forcing $\Delta S$ is offset by the same increase in black body radiation due to a small surface temperature rise $\Delta T$, then we can iterate through long term increases in CO2 concentration. The result is shown in Figure 3.
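A sketch of that iteration (using the logarithmic fit and the averaged emissivity derived above, and the 284 K starting temperature of Fig 3; the number of steps is arbitrary):

```python
import numpy as np

SIGMA   = 5.67e-8   # W m^-2 K^-4, Stefan-Boltzmann constant
EPSILON = 0.65      # effective emissivity averaged over clouds, ocean and land

def step_forcing(c_new, c_old):
    return 6.6 * np.log(c_new / c_old)   # W/m^2, logarithmic fit derived above

def temperature_response(c_start=300.0, c_end=600.0, t_start=284.0, steps=100):
    """Raise CO2 gradually; at each step offset the extra forcing dS by an
    equal increase in black body emission: dT = dS / (4 eps sigma T^3)."""
    T = t_start
    concentrations = np.linspace(c_start, c_end, steps + 1)
    for c_old, c_new in zip(concentrations[:-1], concentrations[1:]):
        T += step_forcing(c_new, c_old) / (4 * EPSILON * SIGMA * T**3)
    return T - t_start

print(f"Warming for 300 -> 600 ppm: {temperature_response():.1f} K")
# ~1.3 K with these round numbers, of the same order as the ~1.5 C quoted below
```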

Fig 3: Surface temperature change induced by a gradual increase in CO2 concentration, from a starting temperature of 284 K.

Summary: By calculating effective emission heights for CO2 we have shown that:

1. Modern levels of 300 ppm of CO2 have resulted in an average surface temperature ~4 deg C higher than for an atmosphere free of CO2.

2. A doubling of CO2 from 300 – 600 ppm results in a further ~1.5C increase in surface temperatures.

Published results from more sophisticated GCMs reduce these figures by ~ 20%.

References

[1] Myhre et al., "New estimates of radiative forcing due to well mixed greenhouse gases", Geophys. Res. Lett., 25, 2715-2718, 1998.
