Land Temperature Anomalies.

This post examines the methodologies and corrections applied to derive global land temperature anomalies. All the major groups involved use essentially the same core station data but apply different methodologies and corrections to the raw measurements. Although the overall results agree, it is shown that up to 30% of the observed warming is due to adjustments and the urban heat island effect.

What are temperature anomalies?

If there were perfect coverage of the earth by weather stations then we could measure the average surface temperature and track changes with time. Instead there is an evolving set of station measurements, incomplete in both place and time, and this introduces biases. Consider a 5°x5° cell containing a two-level plateau rising above a flat plain at sea level. Temperature falls by about 6.5C per 1000m of height, so the real temperatures at the different levels would be as shown in the diagram. The correct area-weighted average surface temperature for that cell would be roughly (3*20+2*14+7)/6, or about 16C. What is actually measured depends on where the sampled stations happen to be located, and since the number of stations and their locations change constantly with time, there is little hope of measuring any underlying trend in absolute temperature. You might even argue that an average surface temperature, in this context, is a meaningless concept.
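The arithmetic of this worked example can be sketched in a few lines of Python (a minimal illustration using the numbers in the text; note the text rounds the 1000m level to 14C where the lapse rate gives 13.5C, so the average comes out just under 16C):

```python
LAPSE_RATE = 6.5e-3  # degrees C per metre of altitude

def cell_mean_temp(sea_level_temp, levels):
    """Area-weighted mean surface temperature for a grid cell.
    levels: list of (altitude_m, area_weight) pairs."""
    total = sum(w for _, w in levels)
    return sum((sea_level_temp - LAPSE_RATE * alt) * w
               for alt, w in levels) / total

# 20 C plain at sea level; plateau levels at ~1000 m and ~2000 m,
# with relative areas 3:2:1 as in the text's example
t = cell_mean_temp(20.0, [(0, 3), (1000, 2), (2000, 1)])
```

A station sited only on the plain would report 20C, one on the upper terrace 7C: neither is the ~16C cell average, which is the sampling bias the text describes.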

The mainstream answer to this problem is to use temperature anomalies instead. Anomalies are typically defined relative to the twelve monthly ‘normal’ temperatures over a fixed 30-year period for each station: CRU use 1961-1990, GISS use 1951-1980, and NCDC use the full 20th century. In a second step, these ‘normals’ are subtracted from the measured temperatures to give DT, the station ‘anomaly’, for any given month. The anomalies are averaged within each grid cell and then combined in an area-weighted average to derive a global temperature anomaly. The sampling bias has not really disappeared, but it has been mostly subtracted out. There remains the assumption that all stations within a cell react in synchrony to warming (or cooling). The procedure also introduces a new problem for stations without sufficient coverage in the 30-year baseline, invalidating some of the most valuable older stations. There are methods to avoid this based on least squares minimization of station offsets, but the end result changes little whichever method is used.
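The two-step procedure described above can be sketched as follows (a toy illustration; the function names and data layout are mine, not any group's actual code):

```python
# Records for one station: {calendar_month: [(year, temperature_C), ...]}
def monthly_normals(records, base=(1961, 1990)):
    """Mean temperature for each calendar month over the baseline years."""
    normals = {}
    for month, obs in records.items():
        in_base = [t for (y, t) in obs if base[0] <= y <= base[1]]
        normals[month] = sum(in_base) / len(in_base)
    return normals

def station_anomalies(records, normals):
    """Subtract each month's normal from every observation to get DT."""
    return {m: [(y, t - normals[m]) for (y, t) in obs]
            for m, obs in records.items()}

# A single station, January only: warming from 5 C (1960) to 9 C (2000)
recs = {1: [(1960, 5.0), (1961, 6.0), (1990, 8.0), (2000, 9.0)]}
norms = monthly_normals(recs)            # January normal = (6 + 8) / 2 = 7 C
anoms = station_anomalies(recs, norms)   # e.g. 2000 anomaly = +2 C
```

A station with no observations inside the baseline window would leave `in_base` empty, which is exactly the problem with valuable older stations noted above.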

The situation in 1990

CRU had developed their algorithms several years before NCDC first released version 1 (V1) of their station data in 1989. This contained data for some 6,000 stations, many of which overlapped with stations provided by CRU. I have re-calculated the yearly anomalies from V1 and compared them to the then-available CRU data (Jones 1988), as shown in Figure 1. The first IPCC assessment (FAR) was written in 1990, and at that time there was essentially no firm evidence for global warming. This explains why FAR concluded that any warming signal had yet to emerge.

Figure 1: First release of GHCN (1990) compared to Jones et al. 1988.


We see that all the evidence for global warming has arisen since 1990.

Situation Today

The current consensus that land surfaces have warmed by about 1.4C since 1850 is due to two effects. Firstly, the data from 1990 to 2000 have consistently shown a steep rise in temperature of ~0.6C. Secondly, the data before 1950 have become cooler, and this second effect is a result of data correction and homogenization.

The latest station temperature data available are those from NCDC (V3) and from Hadley/CRU (CRUTEM4.3). NCDC V3C is the ‘corrected & homogenized’ data while V3U is still the raw measurement data. I have calculated the annual temperature anomalies for all three datasets: GHCN V1, V3U (uncorrected) and V3C (corrected). The CRUTEM4 analysis is based on 5549 stations, the majority of which are identical to GHCN. In fact CRU originally provided their station data as input to GHCN V1.

Figure 2 shows the comparison between the raw data (V3U) and CRUTEM4, while Figure 3 compares V1 and V3U with the corrected V3C results.

Figure 2: Comparison of V3raw with CRUTEM4.3.

Figure 3: Comparison of V1, V3U and V3C global temperature anomalies. V3C are the red points.


The uncorrected data result in ~25% less warming compared to V3C. This is the origin of the increased ‘cooling’ of the past: the corrections and homogenization have mostly affected the older data by cooling the 19th century. There is also a small increase in recent anomalies in the corrected data. A comparison of V3C, GISS, BEST and CRUTEM4 gives a consistent picture of ~0.8C warming of land surfaces since 1980. Data corrections have had a relatively small effect since 1980.

Figure 4: Comparison of all the main temperature anomaly results


Relationship of Land temperatures to Global temperatures

How can the land temperature trends be compared to ocean temperatures and global temperatures? The dashed curves in Figure 4 are smoothed by a 4-year-wavelength FFT filter. Averaging them all together gives one ‘universal’ trend, shown as the grey curve: a net 0.8C rise since 1950 with a plateau since 2000 (the ‘hiatus’). Let’s call this land trend L.
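An FFT low-pass filter of this kind can be sketched as follows (an assumed implementation, since the post does not give its exact filter code: Fourier components with periods shorter than 4 years are simply zeroed):

```python
import numpy as np

def fft_lowpass(series, dt_years=1.0, cutoff_years=4.0):
    """Remove Fourier components with periods shorter than cutoff_years."""
    coeffs = np.fft.rfft(series)
    freqs = np.fft.rfftfreq(len(series), d=dt_years)  # cycles per year
    coeffs[freqs > 1.0 / cutoff_years] = 0.0          # kill fast wiggles
    return np.fft.irfft(coeffs, n=len(series))
```

Applied to annual anomalies this leaves the multi-decadal trend intact while suppressing year-to-year noise, which is what the dashed curves in Figure 4 show.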

Now we can take the ocean temperature data, HadSST2, and apply the same FFT smoothing to give an ocean trend O. A global trend G can then be defined from the fractions of the earth’s surface covered by ocean and by land:

G = 0.69*O + 0.31*L
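The blend above is straightforward to compute (a minimal sketch; 0.69 and 0.31 are the ocean and land fractions of the earth's surface quoted in the formula):

```python
def global_trend(ocean, land, f_ocean=0.69):
    """G = f_ocean * O + (1 - f_ocean) * L, element by element."""
    return [f_ocean * o + (1.0 - f_ocean) * l for o, l in zip(ocean, land)]

# e.g. a year where the ocean trend is 0.4 C and the land trend is 0.8 C
g = global_trend([0.4], [0.8])   # -> [0.524]
```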

Next we can simply plot G and compare it to the latest annual Hadcrut4 data.


It is a perfect fit! This demonstrates an overall consistency between the ocean, land and global surface data. Next we look at other possible underlying biases.

CRUTEM3 and CRUTEM4

There are marked differences between CRUTEM3 and CRUTEM4 for two main reasons. Firstly, CRU changed the method of calculating the global average: CRUTEM3 used (NH+SH)/2, while CRUTEM4 uses (2NH+SH)/3, the argument being that land surfaces in the northern hemisphere cover about twice the area of those in the southern hemisphere. Secondly, CRUTEM4 added ~600 new stations in Arctic regions, where anomalies are known to be increasing faster than elsewhere. This is also the main reason why 2005 and 2010 became slightly warmer than 1998; in CRUTEM3 the hottest year remains 1998.
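The two averaging formulae are simple enough to compare directly (function names are mine; the weights are those quoted above):

```python
def crutem3_global(nh, sh):
    """CRUTEM3: equal hemisphere weights."""
    return (nh + sh) / 2.0

def crutem4_global(nh, sh):
    """CRUTEM4: NH weighted double, since it has ~twice the land area."""
    return (2.0 * nh + sh) / 3.0
```

With the NH warming faster, the change alone raises the global number: for hemispheric anomalies of, say, 1.2C (NH) and 0.6C (SH), CRUTEM3 gives 0.9C while CRUTEM4 gives 1.0C, the ~0.1C methodology effect summarised below.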

This process of infilling the Arctic with more data has been continued by Cowtan and Way [3] in their corrections to CRUTEM4. It is effective there because equal-angle (lat, lon) grid cells multiply rapidly near the poles while covering ever smaller areas.
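The geometry behind this is just the cos(latitude) area weighting used in gridded averages (a minimal sketch, not Cowtan and Way's actual kriging code):

```python
import math

def cell_area_weight(lat_deg):
    """Relative area of an equal-angle grid cell centred at lat_deg."""
    return math.cos(math.radians(lat_deg))

# A 5x5 degree cell centred at 82.5N versus one at the equator:
ratio = cell_area_weight(82.5) / cell_area_weight(2.5)
```

A polar cell carries only about an eighth of the weight of an equatorial one, so many Arctic cells can be infilled while adding little global area, yet the rapid Arctic warming they contain still nudges the global average upward.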

Figure 5: Differences between CRUTEM3, CRUTEM4 and different global averaging methodologies. All results are calculated directly from the relevant station data.


To summarise:

  1. ~0.06C is due to the large increase in Arctic stations
  2. ~0.1C is simply due to the change in methodology

Arctic stations have large seasonal temperature swings of 20-40C. Interestingly, the increase in anomalies seems to occur in winter rather than in summer.

Urban Heat Effect


Fig 6: Growing cities appear to be ‘cooler’ in the past when converted to temperature anomalies

The main effect of Urban Heat Islands (UHI) on global temperature anomalies is to ‘cool’ the past. This may seem counter-intuitive, but the inclusion of stations in large growing cities has introduced a small long-term bias in global temperature anomalies. To understand why, consider a hypothetical station where rapid urbanization causes a 1C rise relative to a nearby rural station after ~1950, as shown in Figure 6.

All station anomalies are calculated relative to a fixed set of twelve monthly normals, e.g. for 1961-1990, independent of any absolute temperature differences between stations. Even though a large city like Milan is up to 3C warmer than the surrounding area, this makes no difference to their relative anomalies.

Figure 7: Comparison of V3U (blue), V3C (red) and CRUTEM4 (green). The top graph shows the adjustments.


Such constant differences are normalized out once the seasonal averages are subtracted. As a result, ‘warm’ growing cities, when plotted as anomalies, appear to be ‘cooler’ than the surrounding areas before 1950. This is simply another artifact of using anomalies rather than absolute temperatures. A good example of this effect is Sao Paulo, which grew from a small town into one of the world’s most populous cities. There are many other similar examples – Tokyo, Beijing etc.

Santiago, Chile. NCDC corrections increase 'warming' by ~1.5C compared to CRUTEM4


To investigate this effect globally, I simply removed the 500 stations showing the largest net anomaly increase since 1920 and recalculated the CRUTEM4 anomalies. Many of these ‘fast warming’ stations are indeed large cities, especially in developing countries, including Beijing, Sao Paulo, Calcutta and Shanghai. The full list can be found here.


Figure 8: The red points are the standard CRUTEM4.3 analysis of all 6250 stations. The blue points are the result of the same calculation after excluding the 500 stations showing the greatest warming since 1920. These include many large growing cities.


The results show that recent anomalies are little affected by UHI. Instead it is the 19th century that has been systematically cooled by up to 0.2C, because the large cities had mostly already warmed before 1990. Anomalies simply measure the temperature change relative to a fixed baseline.

Data Adjustments and Homogenization.

There are often good reasons to correct individual station data. Some measurements are clearly wrong, generating a spike; such ‘typo’ errors are easy to detect and correct manually. Data homogenization, however, is now done by automated procedures which can themselves introduce biases: homogenization smooths out statistical variations between nearby stations and accentuates underlying trends. Systematic shifts in temperature measurements do occur for physical reasons, but detecting them automatically is risky. For example:

  1. Stations can move location, causing a shift, especially if the altitude changes.
  2. New instrumentation may be installed.
  3. The time of observation (TOBS) can change – this is mainly a US problem.

The NCDC algorithm is based on pairwise comparisons with nearby sites to ‘detect’ such shifts in values. However, if this process is too sensitive then small shifts will be flagged regularly, and this seems to be the case in the ~2000 cases I have looked at: the shifts applied are systematically more negative in the early periods. A priori there is no reason why temperatures 100 years ago could not have been as warm as today, yet the algorithm systematically shifts such cases downwards. One exception is some large cities, such as Tokyo, where a strong warming trend is flagged as unusual; as discussed above, the net effect on anomalies is still to cool the early periods.
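The core idea of pairwise detection can be illustrated with a toy sketch (this is only the idea, not NOAA's actual pairwise homogenization algorithm: the difference series between a target and a nearby reference should be roughly flat, and a persistent step in it flags a candidate inhomogeneity):

```python
import numpy as np

def detect_step(target, reference, threshold=0.5):
    """Find the index with the largest mean shift in the difference
    series (target - reference). Returns (index, shift_C), or
    (None, 0.0) if no shift exceeds the threshold."""
    diff = np.asarray(target) - np.asarray(reference)
    best_k, best_shift = None, 0.0
    for k in range(2, len(diff) - 2):
        shift = diff[k:].mean() - diff[:k].mean()
        if abs(shift) > max(abs(best_shift), threshold):
            best_k, best_shift = k, shift
    return best_k, best_shift
```

The sensitivity question raised above lives in `threshold`: set it too low and ordinary noise between stations gets ‘corrected’ as if it were a station move.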

Some examples are shown below. Vestmannaeyja is an Icelandic station. In the graphs, green is CRUTEM4, red is V3C and blue is V3U.


The algorithm has detected a ‘shift’ around 1930 lowering measured temperatures before that date.

The graphs in each figure are ordered as follows: the bottom graph shows the monthly average temperatures; above it are the monthly anomalies, followed by the yearly anomalies; finally, the top graph shows the adjustments made to the yearly anomalies by NOAA (red) and CRU (green).


Santiago, Chile. Apparent warming has been mostly generated by the NCDC adjustments themselves. Is this justified?

Most of the NOAA adjustments are simple step functions that shift a block of measurements up or down. These are the sort of corrections expected for a station move, but they have been applied across all stations. The algorithm searches out unusual trends relative to nearby stations and then shifts them back towards the expected trend. It is mostly the early data that gets corrected, and most corrections are negative, which explains the origin of the 25% increase in net global warming in V3C compared to the raw data V3U. Some changes are justified, whereas others are debatable. One example where the correction algorithm generated an apparent warming trend not present in the measurement data is Christchurch, New Zealand.
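Applying such a step adjustment is trivial, which is part of the concern (a minimal sketch of the operation described above, not NOAA's code):

```python
def apply_step_adjustment(series, break_index, shift):
    """Shift every value before break_index by `shift` degrees C,
    leaving later values unchanged -- a simple step-function correction."""
    return [t + shift for t in series[:break_index]] + list(series[break_index:])
```

A negative `shift` applied before a detected break cools all earlier data at a stroke, which is exactly how a flat record can acquire an apparent warming trend.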


Christchurch, New Zealand. The top graph shows the shifts introduced by the NCDC correction algorithm. There seems to be no apparent justification for the two upward shifts made in 1970 and 1990; without them there is no warming trend in the data after 1950.

The pairwise algorithm looks for anomalous changes in temperature data by comparing measurements with nearby stations. Since some of those stations are themselves affected by UHI, there remains the suspicion of a past cooling bias also infecting the correction algorithm.

Conclusions

Measuring global warming based on temperature anomalies is a bit like deciding whether the human population is getting taller based on a random sample of height measurements. Despite this, there can be little doubt that land surfaces have warmed by about 0.8C since 1970, or that there has been little further increase since 2000.

Data correction and homogenization as applied to station data have suppressed past temperature anomalies and slightly increased recent ones. The net result is to have increased apparent global warming since the 19th century by 25%.

The urban heat island effect also decreases apparent past temperature anomalies by ~0.2C while leaving recent values mostly unchanged. This is due to a re-normalization bias of the zero line.

CRU changed their definition of the global land average in the transition from CRUTEM3 to CRUTEM4. This, together with the addition of ~600 new Arctic sites, explains the increase in recent temperature anomalies.

Data sources:

  1. NOAA GHCN V3: https://www.ncdc.noaa.gov/ghcnm/v3.php
  2. CRUTEM4: http://www.metoffice.gov.uk/hadobs/crutem4/data/download.html
  3. Cowtan & Way: http://www-users.york.ac.uk/~kdc3/papers/coverage2013/

71 Responses to Land Temperature Anomalies.

  1. Ron Graf says:

    Clive, thanks for returning to looking at how the land record sausage is made. My suspicion is that land cultivation causes as large a relative UHI as does paving roads and constructing homes. This continuous process over the last 150 years is gradual and affects stations one at a time in different instances. There is no meta-data to record when a new runway or concourse is added to the airport where the weather station is.

    Clearly the ad-hoc or automated corrections are inadequate for such a powerful confounding of the global warming signal. Because UHI does little to the daily maximum temperature, Tmax, and daytime air is better mixed in the lower troposphere, Dr. John Christy has recently proposed measuring global warming by looking only at trend anomalies of Tmax.

    Another solution I would propose is to do a UHI analysis on a large sampling of land stations and estimate what the net warm bias would be caused relative to a sampling of stations in 1850 using historical geographic knowledge of the stations. Then simply subtract the net UHI gain in the average station and adjust the final global warming number by that.

    What do you think of each of these options and perhaps others that could improve the confidence in the land record?

    • clivebest says:

      Of course it is true that those parts of the world with long-term weather stations all tend to be in countries which have seen rapid development in the 20th century, whether in cities or near urban areas. Even those in the countryside must be slightly affected by the increase in mechanized farming and transport. When I redid the CRUTEM4 analysis after removing 500 of the fastest warming stations (mostly cities), net warming since 1850 was reduced by 0.2C. So that is my best estimate of UHI on land. However, global temperatures are affected more by sea surface temperatures, and it is hard to imagine that shipping and human development could have changed these on their own. As a result UHI is an interesting effect to study and is important for land-based measurements, but its net effect on global temperatures including the oceans must be rather small, ~0.1C at most.

      I think the natural warming caused by recovery from the LIA may be more important, but this is ignored by IPCC.

  2. Ron Graf says:

    Here is a string back in Jan 2016 where I and another blogger dogged Steven Mosher on UHI until he stopped playing hide the pee, and in a moment of late-night weakness he told us his life’s story on UHI in uncharacteristically earnest fashion:

    Yes, that is the mystery [that UHI is real but does not affect the data] as Peterson termed it. Go ahead and read back to everything I wrote about Peterson.

    We all acknowledge that UHI is real. Some argue it is potentially dangerous. To make that case they do tend to focus on worst case UHI. Since we see UHI in individual records it seems OBVIOUS that the global record should show some sign of it.. after all there are plenty of urban sites in the total average. But when we use the same method , that found the signal in individual cases, and apply it to global data, the signal gets attenuated.. to the vanishing point.
    To me it is still a mystery of sorts. I was relieved when I found a small signal ( .03C/decade) in the US. when I unleashed that same method on the globe….. POOF that small effect disappeared.
    Ross had a different method he thought could work to pull out the signal.. I still play around with that. But it is frustrating to work for a few months.. load up the data and POOF, get the same answer.

    For me here is where it stands:

    A) Our expectations for average UHI over the life of a station may be too high. We are probably conditioned to expect to see values like 2C.

    B) Studies that show UHI in a city get published more easily than
    those that dont. But also see a study of 419 large cities using
    LST as opposed to SAT. The average effect is less than 2C

    C) If a UHI of 2C hits Tmin… and Tmax is un effected.. then
    Tave will see 1C.

    D) if 2C is UHI max and it hits Tmin, but only for 10% of the year
    you are down to a .1C effect in Tave.

    E) Rural/urban isnt either or. That is you can have rural sites
    that are biased by micro site and urban sites that are in cool zones.
    Point E is especially important
    Suppose you have 20 sites
    10 urban with a trend of 1C
    10 rural with a trend of zero
    Now compare them and the difference is apparent.
    Now suppose that you misidentify a rural as urban and an urban as rural
    Urban is now 9 with a trend of 1C and one with a trend of 0
    Rural is now 9 with a trend of 0 and one with a trend of 1C
    Now do the comparison..
    Next imagine the difference isnt 1C but .1C and you see that
    errors in identifying or misclassification can obscure the difference.
    Bottom line. It looks like a simple problem. Divide the raw data into two piles and compare. Opps.. how the hell did that NOT WORK.
    So people cant ask me to be quiet about the test and what didnt work. I went into this thing to prove peterson and Parker wrong.
    Look at all I wrote about Parkers study at climate audit. I went into this thinking they were wrong obviously wrong and so showing that would be easy. two piles.. easy peasy. rural only easy peasy. Only I failed. YEARS OF FAILURE.. finally i found something with zeke and nick.. sure it the US only.. so take that same method and use it in Berkeley earth DOH! they invited me to visit

    I will tell you a funny story.
    After I gave Rhode my classifier he went away and did the study
    i forget the numbers so I’ll just use X and Y, say X is .1 and Y is .15
    he walked to the board.
    He wrote down
    Berkeley Earth UHI -.1C
    Mosher ?.15C
    and then he said.. guess the sign on Mosher’s approach.
    haha it was negative. So the same filter that found UHI in the US
    didnt find it globally. Imagine how I felt.
    Anyway.. its a mystery. There are some explainations. I’d love to find .15C of UHI in the land record.. that would be .05C in land and ocean.. err ya.. not the kind of result that changes science.
    Suppose I found .3C? that would be .1C in land ocean. Not a game changer.
    Suppose i found 1C.. Now, that would change land ocean by .3C
    that is an interesting result.
    So you see why I agree with DEwitt on where the action and debate is? You’d have to find that 1C of all the warming since 1850 was
    UHI to have an interesting result for the science. Not gunna happen. there was an LIA.
    He [DeWitt] wrote.

    I disagree with a lot of Steven Mosher’s points; mainly that not finding it in the record using a Menne-type approach means that it vanished and never was. I think the Occam’s razor explanation would be that rural land-use change trends UHI almost as strongly as urban expansion does, even more so in the world at large, where cultivation had to keep up with population.

    On the last point, if UHI were found to be 0.3C of the land record, and knowing that land is more responsive to climate change than the sea, and knowing we only have good sea data for the last 10 years, Occam’s first thought would be to look for a sea record bias.

    • Clive Best says:

      Thanks Ron,

      That’s very interesting. Steven Mosher can often come across as rather arrogant, but here we see what he really thinks. I am sure that some of the mystery of UHI lies in the use of temperature anomalies rather than measured temperatures.

      I once did a simple calculation of the global effect of humanity on temperatures by using IEA value for total energy consumption. It averages at about 20,000 GW of heating power. If we say this is spread evenly over 100 million km2 of land (half the northern hemisphere) then it works out at 0.2 W/m2. This is small but not insignificant when you consider that the bulk will be concentrated in towns and cities.

      According to Google, urban areas cover 0.5% of the world’s land area, or 2.5 million km^2. Concentrating the heating there brings the direct manmade flux up to 8 W/m2, which is significant.
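The back-of-envelope numbers in the two comments above check out (reproduced here as a sketch; all inputs are the figures quoted in the comment, not independent data):

```python
power_w = 20_000e9       # ~20,000 GW total energy consumption (IEA figure quoted)
land_m2 = 100e6 * 1e6    # 100 million km^2, roughly half the NH surface
urban_m2 = 2.5e6 * 1e6   # 2.5 million km^2, ~0.5% of world land area

flux_over_land = power_w / land_m2     # W/m^2 if spread evenly over land
flux_over_urban = power_w / urban_m2   # W/m^2 if concentrated in urban areas
```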

      For UHI everything depends on the sampling locations of the weather stations.

  3. Pingback: Land Temperature Anomalies – Climate Collections

  4. catweazle666 says:

    “This and the addition of 600 new arctic sites explain the increase in recent temperature anomalies.”

    Are these “real” sites or specious sites generated by infilling using Kriging or some other AlGoreithmic variety of making stuff up so beloved of climate “scientists”?

  5. Clive Best says:

    CRU added 600 real sites but they also dropped 176 real sites mainly in the US and South America when they moved from CRUTEM3 to CRUTEM4. You can see the difference that made to the trend below.

    Then along came Cowtan & Way who went the whole hog and also used Kriging to infill all available polar grid cells. However they didn’t do that for Antarctica where there is less evidence of warming.

  6. erl happ says:

    I quote you Clive: ‘Interestingly the increase in anomalies seems to occur in winter rather than in summer.’

    You have done the hard yards of checking the work of others in a capable and careful fashion. Great, but there is another way to look at this problem.

    I would urge you to look at anomalies by the month in each latitude band. As many or as few latitude bands as you can cater for, but at a maximum of 30° of latitude at a time in which case there will be six zones to examine. You will see that between 30° south and the Arctic temperature variability is greatest in the months of January and February and I am not talking tiny differences. Massive differences. If you look at the data in decadal intervals you will see both cooling and warming.

    Between 30° south and the south pole variability is much greater in July-August and greater than in the Arctic. Again, looking at decadal intervals you will see both cooling and warming.

    Variability increases dramatically with latitude either side of the 30° latitude band. The variability is as you noticed, greatest in winter.

    Now, the differences are astounding and point to a mode of climate change that originates at the poles in winter. Temperature adjustments performed by various authorities in a quest to construct a ‘global average’ are of little interest when judged against the differences you will see when you look at temperature change by the month.

    The pattern of change in surface temperature is at odds with the suggested cause, back radiation from greenhouse gases that are well mixed and should therefore promote an increase in temperature that is similar across the year and regardless of latitude.

    Aggregation has its problems. The chief of these is the obliteration of all traces of cause and effect, patterns of variability, any difference in the behaviour of temperature between the day and the night and according to the seasons. Aggregation takes us further away from the task of making a reasonable assessment of both cause and consequence.

    Warming in winter in mid to high latitudes represents an improvement in the habitability of the planet. The growing season is longer. Somehow we lose sight of that. Warming in very high latitudes that are so cold as to be uninhabitable is of no real interest if the temperature stays below the freezing point of water.

    Discovering the cause of the change that has occurred, that in my mind is entirely natural in origin, is necessary in order to assess any possible impacts by mankind.

    I reproduce my own assessment of the nature of temperature change based on reanalysis data here: https://reality348.wordpress.com/2016/01/15/8-volatility-in-temperature/

    I then go on to explore the natural processes that are clearly involved.

    • Clive Best says:

      Yes, warming is very much latitude- and seasonally-dependent. As you show in your report, most variability occurs in winter in polar regions. Without any solar heating the stratosphere descends almost to ground level. I am not sure I understand your explanation based on ozone, however. How does ozone drive, say, the Jet Stream?

      Here is an animation of all Hadcrut4 data, and you can see very well how variability shifts north-south with the seasons. It is also noticeable how little warming occurs in the tropics. I have the feeling that sea surface temperatures are limited to 30C by the Clausius-Clapeyron equation.

      https://www.youtube.com/watch?v=l4cCWz878f0

      Clive

  7. erl happ says:

    Clive, you ask how ozone drives the jet stream.

    The movement of the air is a response to differences in atmospheric density that is reflected in surface pressure.

    The greatest variations in atmospheric density actually manifest not at the surface but in the region of the tropopause. To be accurate, it’s in a broad region between 7 km and 20 km in elevation. That is where we find the Jet Stream.

    Why is it so that the largest differences in density manifest at this level?

    Gordon Dobson observed back in the 1920s, as others had done before him that ozone maps surface pressure. It is the upper half of the atmospheric column that contains most of the ozone.

    Ozone absorbs some of the abundant radiation that emanates from the Earth itself at 9-11 um and imparts that energy to surrounding molecules. The greater the atmospheric density the more efficient is the process. So ozone heats the air more efficiently near the surface than it does aloft.

    It is the cold dense air of higher latitudes that has the greatest total column ozone giving rise to the lowest atmospheric pressure.

    It is the warm, less dense air of the mid and low latitudes that has the least ozone aloft. So, here surface pressure is higher.

    It is apparent that the circulation of the air about low and high pressure cells is a direct product of the ozone content of the air aloft.

    It is observed that the surface warms when geopotential height increases at 500hPa or 250hPa. GPH directly reflects the temperature of the air below a given pressure level. Ozone contributes to the temperature of the air as it descends in high pressure cells. If the partial pressure of ozone changes in high latitudes it changes across the globe because at high latitudes ozone drives uplift. What goes up must come down. There are extremely vigorous winds in the stratosphere that vary in direction according to altitude. Hence the ability of Google to guide balloons about the globe.

    The moisture level in the upper atmosphere changes very little but the temperature of the air changes a lot. And so therefore does cloud cover.

    If you look at what drives the planetary winds, it is the interchange of atmospheric mass between high and other latitudes. This is known as the annular modes phenomenon (colloquially, the Arctic Oscillation). It is the strength of polar cyclone activity generated at the polar front, ozone-rich air on one side and ozone-poor on the other, that drives the generation of these cyclones. They propagate downwards from the jet stream altitude. If these cyclones strengthen they shift atmospheric mass from the poles to lower latitudes and into the summer hemisphere.

    Probably more than you bargained for. Consider it a bonus. Be very happy if you could comment on, query or even contradict what I say at https://reality348.wordpress.com

    • climategrog says:

      This is very interesting Erl. I will have a closer look at your site. There’s a lot to digest.

      I have shown that the late 20th c. warming that caused all the alarm seems related to the cooling of the stratosphere caused by the two major eruptions.

      https://climategrog.wordpress.com/uah_tls_365d/

      Destruction of stratospheric ozone seems to be a key part of this process.

      • erl happ says:

        Greg, the stratosphere is a complex household and it does it no justice to reduce its thermal behaviour to a single statistic. The devil is in the detail and in particular at 60-70° south latitude in terms of surface pressure and also in terms of Antarctic polar cap temperature between 30hPa and 10 hPa.

        The ozone content of the entire stratosphere is driven by the interaction between a tongue of mesospheric air that lies over the continent and the ozone rich air on its perimeter. Between the two we have a chain of Polar Cyclones……..the canary in the climate change coal mine.

        If one studies the variation in GPH globally all change begins and is preceded by change over the Antarctic.

      • erl happ says:

        Re: “Destruction of stratospheric ozone seems to be a key part of this process.”

        The notion that the ozone content of the stratosphere has suffered at the hands of man should be given the closest scrutiny.

        The ozone hole that manifests over the Antarctic continent between 10 and 23 km in elevation in October is actually a product of the circulation of the air below 50hPa that has been a feature of the Antarctic circulation since prior to the release of agents into the atmosphere that can activate chlorine chemistry. There is good reason for a fluctuation in the size of that hole that relates to wholly natural aspects of the Antarctic circulation under a regime of changing ozone partial pressure.

        The ‘hole’ is a natural feature of the circulation that is always present at the time of the final warming of the stratosphere. It is just a special case of the sort of depletion of ozone that we see under a high surface pressure regime in the mid latitudes on a year round basis, due to the uplift of NOx from the near surface atmosphere, as I establish here: https://reality348.wordpress.com/2016/05/14/23-the-dearly-beloved-antarctic-ozone-hole-a-function-of-atmospheric-dynamics/

        • climategrog says:

          My point here is that the destruction of ozone is the result of two punctual natural events, not anthropogenic ones.

          The Montreal Protocol and ensuing crusade against CFCs was based on typically naive “trend” analysis where a straight line is drawn through the data and spuriously correlated to an increase in CFC production.

          Climatology is obsessed with straight lines which are then the basis of assuming that an artificially induced correlation is proof of causation.

          Like you say the devil is in the detail and the devil in this case seems to be stratospheric eruptions, not CFCs.

          Once informed by this effect we can see that the late 20th c. rise in lower tropo temps may well have the same cause:

    • climategrog says:

      I’ve been saying for a few years that the idea of ‘polar amplification’ is looking at things backwards. The changes are larger in the Arctic because they *start* there. There is no ‘amplification’, just the usual attenuation further from the source.

      There are some interesting similarities between global CO2 change ( per MLO ) and the Arctic Oscillation, with several years lag.

      • erl happ says:

        That appears to be a quite striking relationship. Here is part of a possible rationale:

        The ability of a liquid to dissolve a gas and the gas that can be dissolved depends upon the temperature of the liquid.

        Rising AO index indicates lower surface pressure over the Arctic Ocean and an increased flow of warm mid latitude air towards the Arctic. This is due to ozone enhancement: increasing polar cyclone activity shifts atmospheric mass to the mid latitudes, reducing cloud cover over the oceans in the process. As surface pressure rises in mid latitudes so does geopotential height. There is a known relationship between GPH and surface temperature, and it works by changing cloud cover. Warm air will hold more moisture. More moisture held in gaseous form means less cloud in solid ice crystal form. The result is more insolation at the surface and a warmer skin temperature.

        Warm waters in the ocean will hold less CO2 so the increase in atmospheric CO2 will be enhanced due to slowed absorption.

        But I see no reason for a lag.

        However, the matter is complex because the rate of exchange of waters between the tropics and high latitudes depends on the planetary winds that increase in velocity with mid latitude pressure. It is possible that the main field for absorption of gas is where waters are very cold rather than for the ocean as a whole. That is in the southern Ocean adjacent to Antarctica rather than in the Pacific and Atlantic Oceans in the northern hemisphere. In the winter a lot of that very cold ocean adjacent to Antarctica gets covered by ice.
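The solubility point above can be put in rough numbers. Below is a minimal sketch using Henry's law with the standard van 't Hoff temperature dependence for CO2; the 0.034 mol/(L·atm) and 2400 K coefficients are textbook reference values, not figures taken from this thread:

```python
# Van 't Hoff sketch of why warmer water holds less CO2 (Henry's law).
# kh_298 is the CO2 solubility at 25 C in mol/(L*atm); 2400 K is the
# commonly tabulated temperature-dependence coefficient for CO2.
import math

def henry_co2(temp_c, kh_298=0.034, c=2400.0):
    """CO2 Henry's-law solubility at temp_c degrees Celsius."""
    t = temp_c + 273.15
    return kh_298 * math.exp(c * (1.0 / t - 1.0 / 298.15))

cold, warm = henry_co2(2.0), henry_co2(25.0)
print(cold > warm)   # colder water dissolves more CO2 -> True
```

So water near 2 C holds roughly twice the CO2 of 25 C water per unit of partial pressure, which is the sense in which warming surface waters would slow absorption.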

        • climategrog says:

          Thanks, I’m aware of the out-gassing dCO2 vs SST effect:

          It dominates shorter time scales. The question is how much it decreases with frequency. Here is a first look:

          https://climategrog.wordpress.com/d2dt2_co2_ddt_sst-2/

          Looking at numbers and actual observations, it seems that intermediate values are the most sensitive to changes in SST. Cold water absorbs most, but fairly constantly so; the temperature dependence is far greater in the change-over from out-gassing to absorption in more temperate waters.

          I have not looked into the AO connection in more detail but the correlation in the post 1998 section is quite striking, especially since it mimics the wiggle as well as the general ups and downs.

          I suppose some loon could come along and suggest that dCO2 causes AO. It’s about the only thing that they have not already said to be caused by fossil fuel burning !!

          • erl happ says:

            The effect of the AO in generating extreme variability in surface temperature in January-February extends all the way through to 30° south latitude. These are the months when variability is greatest, and the effect is very marked. I believe this is due to the highly ozone charged nature of the northern hemisphere. This alone should start one looking for the origins of that extreme variability in January at what is happening in the Arctic in the middle of winter, how it results in the extremes of geopotential height variation in high and mid latitudes in those same months, and how this relates to cloud cover.

            GPH and surface temperature vary together in a long observed relationship.

            This is natural climate change in action. Once one understands this source of natural variation and looks at its product there is no need to invoke any other explanation for the changes observed.

      • erl happ says:

        Re ‘Polar Amplification’. I agree with what you say. It's the other way round. Change originates at the poles and it is marked with a winter apostrophe.

        One is aware that the largest response is always seen close to the stimulus and the wave activity falls away the further you are from the point where the stimulus is applied.

  8. climategrog says:

    G = 0.69*O + 0.31*L

    The problem with this arithmetic approach is that it is non-physical.

    We are not interested in an abstract statistical average if we are trying to assess climate “forcings” and their effects; we are interested in heat content.

    Land has about half the heat capacity of ocean, so the 30% weighting is about twice what it should be to assess radiative forcing.

    https://judithcurry.com/2016/02/10/are-land-sea-temperature-averages-meaningful/

    In particular, see the update at the end of the article:

    A typical equation defining the change in temperature in response to a change in radiative ‘forcing’ F has the form:

    \Delta F = \lambda * \Delta T + \Delta N ; where \Delta N is the change in top-of-atmosphere radiation.

    \lambda is the reciprocal of climate sensitivity ( CS ). A more realistic model to assess the effect of differing responses would be:

    \Delta F = \alpha * \lambda_{land} * \Delta T_{land} + (1 - \alpha ) * \lambda_{sea} * \Delta  T_{sea} + \Delta N

    Greg Goodman

    • climategrog says:

      Oh dear, WP has screwed the formulae. The question marks are deltas. See the article at Judith’s to where WP did not screw it up.

      Greg.

    • Clive Best says:

      OK fixed the formula.

      Yes I agree that is the correct formula for the energy balance. I was just pointing out that when the various groups calculate the average global temperature anomaly, all that they do is to simply take an area weighted average of the sea and land anomalies combined. So the end result is just T_{global} = \alpha * T_{land} + ( 1 - \alpha ) * T_{sea}
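To make the contrast between the two blends concrete, here is a small sketch. The anomaly values and the 2:1 ocean/land heat-capacity ratio are illustrative assumptions drawn from the discussion above, not measured data:

```python
# Compare the published area-weighted global anomaly with a
# heat-capacity-weighted alternative, using made-up anomalies.

ALPHA = 0.31          # approximate land fraction of Earth's surface

def area_weighted(t_land, t_sea, alpha=ALPHA):
    """The simple area-weighted blend the index groups publish."""
    return alpha * t_land + (1 - alpha) * t_sea

def capacity_weighted(t_land, t_sea, alpha=ALPHA, cap_ratio=0.5):
    """Down-weight land by its (assumed) relative effective heat capacity."""
    w_land = alpha * cap_ratio
    w_sea = 1 - alpha
    return (w_land * t_land + w_sea * t_sea) / (w_land + w_sea)

# hypothetical anomalies: land warming twice as fast as sea
t_land, t_sea = 1.0, 0.5
print(area_weighted(t_land, t_sea))      # 0.655
print(capacity_weighted(t_land, t_sea))  # smaller, closer to the sea value
```

Under these assumptions the capacity-weighted figure comes out noticeably lower, which is the sense of Greg's objection to the plain area-weighted average.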

  9. climategrog says:

    Thanks for this article. I did not realise that Had CRUFT4 was using Cowtow and Wayoff’s Arctic shenanigans now.

    Since the alarmists have no answer to the pause in warming without admitting that their models are busted, they are going with wholesale rigging of the climate record. Mears has rigged RSS; Karl et al think they can “correct” round-the-clock SST by using non-representative NMAT; and now HadCRUFT are rigging the land record.

    They have no place left to run but are prepared to take environmentalism and science down with them.

  10. Ron Graf says:

    Hi Greg, good to see you. I just read your Feb article at Climate Etc. linked above. I am wondering if you have put any time into analyzing UHI and DTR? Also, do you have a thought about the proper factors for converting land station and SST trends to lower troposphere temp trends? Do you think that the transfer of heat from ocean to land through horizontal flow of latent heat moisture is comparable to the transfer of heat from land to ocean by increased Tavg due to higher CS? And, lastly, since there is a different CS for land and sea, as you outline in the article, could not these two independent values be statistically diagnosed? And, if so, could they not be used as two points on a line slope to extrapolate ECS when comparing to the Keeling curve?

    I am not ignoring the huge task of cleaning the land record of UHI and inhomogeneities. Perhaps if the TLT-to-land and TLT-to-sea-surface relationships had a strong theoretical basis, UAH data could be used.

    • climategrog says:

      Hi Ron.

      Firstly I don’t think there is any significant heat transfer from land to ocean via the air. Mass ratio and specific heat capacity make air insignificant in those terms. The most it could do is affect SST by modulating evaporation.

      Horizontal heat transfer from sea to land is very significant. The warming of the Antarctic peninsula, which Steig et al foolishly managed to spread around the whole continent, comes from the wind-driven Föhn effect. Also see Chinook winds. This effectively warms land air temps using OHC via latent heat. Inter-annual change in wind intensity will change mean land air temps, with negligible change to SST. Thus ‘global average temp’ will change without any energy having entered or left at TOA. This will likely be falsely attributed to some radiative ‘forcing’.

      Satellite extractions of TLT are probably a more physically meaningful average of land and sea though they fail to register changes in latent heat content.

      So if we are looking for a calorimeter to assess the effects of radiative ‘forcing’, I think SST, though imperfect, is the best approach. OHC is better but generally still too short to be much use.

      “statistically diagnosed? ”
      Well I did a rough comparison of d/dt(BEST land) and d/dt(SST) and got 2:1. Some published research has tried to estimate the effective heat capacity of land; see refs in the article.

      I don’t see much point in an ‘average’ land and sea temperature. It is physically meaningless for radiation calculations and as physical world measurements you either want to know about land temps or sea temps. No creature on Earth lives in an average of the two. Some live in BOTH but nothing lives in the average.

      Greg

      • erl happ says:

        No creature on Earth lives in an average of the two. Some live in BOTH but nothing lives in the average.

        There is a giggle. I like it.

        • climategrog says:

          I’m not joking. The main “index” that is used to assess global warming aka climate change and is now enshrined in the Paris agreement to remodel the world economy is based on a quantity that has no radiative or environmental applicability.

          It’s a joke all right. But it does not make me giggle. 🙁

      • climategrog says:

        “… could they not be used as two points on a line slope to extrapolate
        ECS when comparing to the Keeling curve?”

        One could derive a separate ECS for both land and sea. That would at least tell us how much change to expect in each environment. However, land will always be constrained by the air coming off the oceans.

        Currently they are just spuriously inflating ECS by non physical mixing of the two records. This also means that they are further polluting the SST record with the UHI issues.

  11. Ron Graf says:

    Greg, I’m not sure you understood my point. It seems there is a huge debate about whether ECS is closer to 3C or 1.8C. Since Clive did his study of the temp indexes last year, showing that land is warming faster than sea, I have been thinking that land is in an accelerated state down the trend that ocean should follow. Of course, this assumes that non-GHG forcings remain generally steady. So, not only could land T be at a separate point on the slope from sea T, one should be able to create land T bins defined by their distance from sea influence to plot the gradient. Not only could this be a tool to extrapolate ECS, but it would be a validation check to compare satellite TLT to surface to see if the gradient is a match. If the gradient is screwed up, for example, showing more warming in coastal cities than inland, perhaps this could be evidence of inhomogeneity or contamination by UHI, or of the Föhn effect, not AGW. It just seems there is a lot more information potential in the data than just global Tavg.

    • climategrog says:

      Yes, there’s a huge debate about ECS; that’s the whole climate question. Land temps are roughly twice as volatile, as I showed in the graph in my article:

      This does not mean that they are “further down the curve”. It means that they move twice as much under the same radiative forcing ( if indeed everything can be explained by radiative forcing ) because of less specific heat capacity, ie they are more volatile.

      Ultimately it is ocean temps and OHC that will determine TCS and ECS, not land. It is debatable how far land could deviate from the constraints of the largest heat reservoir on the planet. Horizontal heat transfer will ensure that it cannot wander too far, and it will not be a simplistic linear relationship, like twice the SST change, if we look in detail.

      Even land is not the same everywhere. Deserts will be about 4x SST since they are dry ; coastal areas will be much more tightly bound to SST.

      The problem is that this obsession with meaningless ‘global averages’ is not physically real either. Averaging will effectively reduce NORMALLY DISTRIBUTED errors; it will not deal correctly with systematic errors or physical structural differences like SHC, or differences in media or humidity, or periodic changes. That will produce spurious results.

      This is why I said the best we have to assess CS is SST data. This also needs to be broken down to at least individual ocean basins and analysed separately.

      There are 9.1 year variations everywhere in climate, yet modellers do not even have ANY account of lunar effects. They seem to assume it’s all 15 and 29.5 d cycles and will all drop out with monthly averaging. Though even this ignorant assumption is not stated anywhere.

      So, no, I don’t think land is further down the curve and can be used to get two dots on a line and extrapolate. We could calculate a TCS for land that probably would be notably higher but would probably be wrong since, despite small scale volatility, it is long-term bounded by SST. It is at the same point on the line with a different volatility.

      I found the 9.07y in cross-correlation of ACE and SST.

      https://judithcurry.com/2016/01/11/ace-in-the-hole/

      Those bumps are lunar driven, not what a lot of people seem to like to *assume* to be a solar signal. That’s why solar cycles sometimes work then go totally out of phase: it’s not solar !

      This is the sort of thing that should have been understood long ago with the huge investments given to climatology, yet it is not even discussed long enough to be dismissed. It just does not exist in their fairy tale world.

      Also, in the Indian Ocean, which lacks any significant NH coverage, it is 9.3 y not 9.1; you can’t just average everything out into a global mush and hope to gain understanding.

      9.05 is the frequency average of 9.3 and 8.85, so it looks like the Indian Ocean lacks one of these lunar effects compared to the Pacific and Atlantic, which have basins in both hemispheres. That needs to be understood. We may then get a handle on long term changes in ocean circulation and be able to remove some of the natural variability. But no one is even looking.
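The arithmetic behind the "frequency average" can be checked directly: averaging the two frequencies and converting back to a period is just the harmonic mean of the two periods, and it does land close to the ~9.07 y resultant mentioned above:

```python
# Period corresponding to the mean of two frequencies 1/p1 and 1/p2,
# i.e. the harmonic mean of the two periods.
def freq_average_period(p1, p2):
    return 2.0 / (1.0 / p1 + 1.0 / p2)

# the two lunar periods discussed above, in years
print(round(freq_average_period(9.3, 8.85), 2))  # -> 9.07
```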

      At the moment any discussion of ECS is pretty pointless since it is confounded with natural variation. What about the early 20th c. rise? What about 300 y of warming since the LIA? Until that is understood it is MEANINGLESS to talk of calculating ECS, since it is confounded with long term natural variations, cyclic or otherwise, and with systematic errors in the data and now wilful manipulation of basic datasets.

      Does that better answer your question?

      • Clive Best says:

        Net tidal forces act horizontally to the earth’s surface and drag water over vast distances to form a tidal bulge under the moon-earth vector. The rotation of the earth then causes the ~2 tides per day. As the moon follows its ~28 day orbital path, the leading and opposite bulges move N-S and vice versa in synchrony. This mixing through all ocean depths helps the transfer of heat to the poles. The timing of spring tides varies with the seasons. In NH winter it is the tide opposite to the moon which is dominant. The greatest mixing occurs at about 45 degrees to the moon-earth vector and this maximum can reach 65N latitude. This maximum is strongest in the North Atlantic/Arctic. However it is not only the oceans that are affected by tides. During the polar night lunar tides can become evident both in the stratosphere and upper troposphere. There is good evidence that variations in the Arctic Oscillation coincide with such maximum spring tides.

        The maximum N-S extent of spring tides depends on the 18.6-year precession cycle. During a lunar standstill the maximum horizontal tidal force increases in latitude by 5 degrees. This must be the origin of such 9.3 y cycles in climate.

        There are over 50 papers reported over the years showing evidence that droughts in the centre of large continental land masses do indeed follow an 18.6 year cycle. A long series of studies by Robert Currie found evidence of 18.6 year cycles in rainfall over Patagonia, India, and mid-west America. 2005/2006 was a lunar standstill year and there were net swings of the AO index through absolute values of ~6 between consecutive new moons that winter.

        • climategrog says:

          Thanks Clive.

          You are correct that tidal motion is principally horizontal, though we have always measured it vertically.

          The notion of tidal bulges is simplistic and, in reality, because of the presence of the continents it never happens like that. There are complex reflections off the coastline, and even the lunar movement itself is very complex, varying in both latitude and distance from the earth on three similar but different time-scales.

          As a result it takes literally hundreds of harmonics to accurately model tides at any one location, and going from one port to another is usually only possible by empirical analysis of each geographic location.

          The phase of the tides in relation to the passage of the moon overhead can be anything and drifts each day. It is not really two tides per 24 h period. Some locations such as the Torres Strait and Indonesia can have four tides per day.

          Interactions of these three lunar periods give rise to longer periods like 18.6 and 8.85 y, which will give rise to large horizontal displacements of water in and out of the tropics, for example, and modulate ocean currents.

          These periods are readily found in SST data and even cyclone energy.

          yet none of this makes it into GCMs

          Greg.

          • climategrog says:

            Here is the Indian Ocean. Notably different for having just the harmonic of 18.6 y, apparently lacking the 8.85 y which produces the more common 9.07 resultant peak.

      • Ron Graf says:

        Greg, I agree there is a ton of complexity and confounding signals. I am confused why people living in this field are discouraged from daring to attempt to decode it, considering “planet urgency” and all. Obviously one must make huge assumptions about the lack of natural variation on the centennial scale and ignore ENSO and other oscillations as noise. But if the central desert is 4x more responsive to radiative forcing, why aren’t we looking where we should expect to find the clearest signal? Also, Tmax and Tmin are two separate pieces of data; why combine and blur them into Tavg? It would be like doctors averaging systolic and diastolic blood pressure instead of reporting both. Then normal, instead of being 120 over 80, would be just 100. If my doctor did not care to report both numbers and just averaged them I would find another doc quick.

        • erl happ says:

          I absolutely endorse this comment. Averages conceal what raw data can reveal and a global statistic is great for propaganda purposes but worse than useless for analysis. Surface temperature data needs to be arrayed by month and according to latitude and the source of natural variation will be revealed. I promise you.

          If one simply changes the temperature gradient from equator to pole, the average changes. The winds do this all the time and we monitor the change in terms of what is called ‘the annular modes’, the most important mode of climate variation yet discovered.

          At any latitude (except 0-30° south) the most extreme variations of surface temperature manifest in the middle of the winter.

        • climategrog says:

          Ron, it is water vapour that controls climate ( not CO2 ! ) so studying regions with the least WV probably would not be more informative about how climate reacts to changes.

          Plus we don’t have a useful history and density of measurements in desert areas.

        • climategrog says:

          “… and ignore ENSO and other oscillations as noise”

          It’s all in the name. Name something an “oscillation” and we visualise a pendulum or a mass on a spring. This biases the reader to ASSUME that it averages to zero without ever needing to say so and thus risk being disproved, or worse, having to justify what you put forward.

          Just about all natural variation gets CALLED the something or other ‘oscillation’. Thus we are led to the spurious assumption that all natural variation is long term neutral “noise” and any long term change must be due to human activity.

          See, it’s easy. No need to do any science, just call everything an oscillation and walk away. Job done. The rest is AGW.

          • Broadlands says:

            The ENSO averages to zero but do the components… El-Nino, La-Nina, El-Nada?

            Has our added CO2 (anthropogenic global warming) changed one, both, or none? It is assumed that it has, but where is the evidence? The 2015-16 El-Nino is no “stronger” than the 97-98 El-Nino in spite of a 10% increase in CO2.

          • climategrog says:

            Things like the ENSO multivariate index average to zero BY DEFINITION since they are detrended. That does not tell us anything objective about the net effect of El Nino and La Nina.

            This is all part of the game of NAMING something to be zero rather than doing climate science to study its effects.

            Nino/Nina are two opposing effects but they are not two swings of the same pendulum, and there is absolutely no reason to assume that the combined effect is long term neutral.

            La Nina stores solar energy and El Nino dumps stored energy into the atmosphere ( and eventually to space ).

            It is a two stage throughput system , not an oscillation.

            Arbitrarily calling it an oscillation and detrending it means that any trend it has will get attributed to something else. Guess what that may be…
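A toy illustration of the detrending point: fitting and removing a straight line forces the residual trend to zero by construction, whatever the underlying series was doing. The synthetic drifting "index" here is an assumption for demonstration only:

```python
# Detrending a series with a genuine drift: the detrended version has
# zero trend by construction, so the drift has been defined away, not
# shown to be absent.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)
series = 0.01 * t + rng.normal(0, 0.5, t.size)   # drifting synthetic "index"

# least-squares linear detrend
slope, intercept = np.polyfit(t, series, 1)
detrended = series - (slope * t + intercept)

print(slope)                           # recovers the real ~0.01 drift
print(np.polyfit(t, detrended, 1)[0])  # ~0: trend removed by construction
```

Any such removed drift then has to be attributed to something else, which is the point being made above.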

          • Broadlands says:

            I’m lost, but trying to learn. Nino/Nina/Nada have no trends, by definition? For thousands of years? Raising weather havoc on both ends of the “pendulum” …as the jet streams “oscillate”.. or whatever it is that they do? Can you un-detrend the NOI indices?

          • climategrog says:

            You could always go back to the physical measurements that any of these “index” things are defined on.

            AMO is another one. Why would you detrend SST unless you have already decided that it is a ‘trend’ plus a neutral, natural oscillation?

            They then splash around ‘removing’ the natural signals based on these detrended things and end up isolating a climate ‘trend’ which can be spuriously attributed to AGW.

            The result is determined by the method, not by the data.

          • Broadlands says:

            It is my understanding that it is measured sea-surface temperatures (SSTs) and pressures (NAO). Obviously(?) they both have made an impact, and had been doing so before we started adding CO2; we then simply assume that our additions must have added to that impact. If so, where is the evidence for that?

          • Ron Graf says:

            I agree. Just because one grants an assumption for the purpose of a test, it does not mean performing the test will succeed in validating the theory when there are alternate explanations for the results. I am proposing to devise more complex tests that have fewer alternate explanations than the current global historical Tavg trend.

            Realizing that change in moisture content confounds a desert test, I would not choose Phoenix, AZ, as one of my test stations, but I might choose Gila Bend, AZ.

            I would test the theory that polar amplification is a part of AGW by looking at both polar trends versus both temperate versus both tropical to prove a trend.

            If diurnal temperature range reduction is a hypothesized effect of AGW I would test for dDTR over land versus ocean. But then knowing that UHI causes dDTR I would test windy and rainy dDTR versus clear and calm, and non-changing rural dDTR versus high growth urban.

            If the data is precise enough one may be able to statistically coax a significant signal for AGW versus UHI. And, the AGW signal should be most amplified over polar land and deserts, and least over tropical seas or small islands.

  12. Broadlands says:

    Trying again… What are the temperatures for the LAND surfaces of the Northern and Southern hemispheres?

  13. erl happ says:

    Query to Ron Graf re this statement: “the AGW signal should be most amplified over polar land and deserts, and least over tropical seas or small islands.”

    Ron,
    1. What in your understanding is the mechanism for this ‘amplification” in polar regions?
    2. Should that amplification of the AGW signal in the polar regions differ in degree according to the month of the year?
    3. Should the amplification be different in the Antarctic to the Arctic?
    4. Does not the Arctic Oscillation Index indicate a fundamental mode of natural climate variation affecting the movement of the air between the mid and high latitudes of the northern hemisphere and therefore surface temperature?
    5. Is the AO affecting the degree of amplification, is change in the degree of amplification affecting the AO, or is something else driving the AO? Could the hypothetical thing driving the AO be producing a change that is being mistaken for amplification?

    • Ron Graf says:

      Erl, my understanding is that CO2 is thought to raise the effective height of steady-state energy flux at the TOA and thereby increase the thickness of the blanket effect of the atmosphere. Since the poles get less incoming sunlight, they have a proportionately larger share of the job of cooling the planet. So a thicker blanket has proportionately a greater impact. In addition, the poles have only about half as thick an atmosphere as do the tropics. This also increases the proportionate effect of raising the effective height.

      I believe polar amplification was predicted by William Kellogg in 1979. I realize that he also predicted a lot more warming than we observed, but it is still impressive that Arctic polar amplification was predicted before it was observed.

      The Antarctic may be experiencing less amplification due to proximity to the southern oceans. I have also heard a theory that the elevation of the mountains may aid in outgoing radiation.

      I read your article on ozone influence but I confess that I lack the meteorological physics base to follow it. I would appreciate your detailed critique of whether my proposed basis for testing AGW theory would be valid.
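For what it's worth, the "raised effective emission height" argument sketched above is often reduced to a lapse-rate calculation: surface warming ≈ lapse rate × rise in effective emission height. The 150 m rise used here is purely an illustrative assumption, not a measured value:

```python
# Back-of-envelope form of the "thicker blanket" argument: if added CO2
# raises the effective emission height, the surface warms by roughly
# lapse rate times the height change.
LAPSE_RATE = 6.5     # K per km, typical tropospheric lapse rate
delta_h_km = 0.150   # assumed rise in effective emission height, km

delta_t = LAPSE_RATE * delta_h_km
print(delta_t)       # roughly 1 K of implied surface warming
```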

      • Greg says:

        “The Antarctic may be experiencing less amplification due to proximity to the southern oceans.”

        There is not less amplification ( ie the same warming as elsewhere ), nor is there less warming. There is NO warming in Antarctica except over the tiny proportion that is the peninsula, where it is caused by the wind-driven Föhn effect transferring heat from the surrounding Bellingshausen Sea.

        Did Kellogg’s flakey ideas predict that?

        • Greg says:

          If you look at the graph I posted above:
          http://clivebest.com/blog/?p=7169#comment-10105

          you will see that SH oceans have shown warming, whereas Antarctica has not. So it is not the oceans preventing warming in Antarctica

            Kellogg grossly exaggerated warming everywhere, which meant he was *accidentally* not too far off in one region. He applied the same “polar amplification” to both poles, which is a total failure.

          The whole “polar amplification” thing is a misnomer because it only happens at one pole.

          • Ron Graf says:

            I stand corrected. What are your thoughts on possible explanations for lack of warming at the south pole? Perhaps the greater sea ice, which might be caused by ocean bottom topography, helps cool.

            Is there less settling black carbon in the Antarctic?

          • Greg says:

            What started out as discussing the impact of taking physically unreal averages of land+sea anomalies on the global trend and their adjustments has now diverged into a discussion of the whole of climatology. I’m going to drop that here.

      • erl happ says:

        Hi Ron, Been away for a few days. Hence the late reply.

        The big swings in polar temperatures occur in the winter and not the summer. Ozone proliferates in winter due to the low angle of the sun, a dramatically reduced ionization rate for both oxygen and ozone due to the angle and the reduced length of day. There is some conjecture that cosmic rays do some ionizing at the winter pole and promote the formation of ozone, much enhanced when the stratosphere is warmer. But in any case ozone partial pressure increases dramatically in winter through to Spring.

        In point of fact, below 10 hPa there is very little ionization of oxygen anywhere. EUV is wholly absorbed in creating the ionosphere, above about 60 km in elevation. Ozone is carried down into the stratosphere and it proliferates to the extent that it escapes ionization by UVC and UVB at 320-330 nm. There is a large expansion and contraction between day and night, and there is in fact a lot of vertical movement in the lower stratosphere, promoted by ‘tongues’ of low density ozone rich air that trail out equator-wards from the rapidly circulating winter stratosphere, which rotates faster than the Earth and in the same west-to-east direction as the Earth.

        Bear in mind that the air is warmer than the icy polar surface in winter. It is the presence of ozone in the air that accounts for its warmth. That said, air containing ozone from the mesosphere is very cold.

        If there is less air descending from the mesosphere the air warms.

        The increase in the temperature of the air at the poles as the AO goes positive (reduced polar pressure) is related to two influences:

        1. Reduced polar pressure and a reduced descent of very cold mesosphere air.
        2. Increased flow of warm air from the mid latitudes.

        This is the dominant mode of temperature change at the poles. It is capable of both warming and cooling. The extent of change that is wrought by this means increases with latitude.

        Last, it’s worth noting that the Antarctic polar upper atmosphere experiences its largest fluctuation in temperature in October, when the ozone hole manifests and the surface pressure at 60-70° south reaches its annual minimum. Pressure is a function of the extent of polar cyclone activity. That in turn relates directly to density contrasts across the vortex, with ozone-rich warmer mid-latitude air on one side and cold mesospheric air on the other.

      • erl happ says:

        Hi Ron,
        Where do I find your proposed basis for testing AGW theory?

  14. Greg says:

    Clive, this (2*NH+SH)/3 is odd. I’d not noticed it, mainly because I don’t think land SAT is particularly relevant, and CRU have messed around so much in the past (and the cat ate their data, etc.) that I don’t think it can be trusted.

    Since they have all the individual time series, I’m a little curious why they need or wish to do two separate weighted averages and then do another weighted average of the two.

    I would guess that this also adds another little plus to help the data along its warming path. During a hiatus, every little counts!

    It would be interesting to do an area-weighted average of all anomalies and see how it compares to their (2*NH+SH)/3 weighted average of two averages.

    • clivebest says:

      Greg,

      I discovered this by accident. I downloaded all the CRU station data about 6 years ago and used their Perl software for calculating the area-weighted averages inside a 5×5 degree grid. They always first calculate the average for each hemisphere, SH and NH. Then for CRUTEM3 they made a global average by simply taking (NH+SH)/2. These were the values that were published. Then out came CRUTEM4 and the differences were originally small, which I analysed here: http://clivebest.com/blog/?p=3493

      The headline for CRUTEM4 at that time was that 2005 was warmer than 1998, whereas before it wasn’t. Hence global warming was continuing as expected. However, later on I noticed that the global values had increased again and I didn’t understand it anymore because I could not reproduce their result. So I tweeted Tim Osborn of CRU.

      Clive Best ?@clivehbest 1 Jun 2015 Huntingdon, England
      @TimOsbornClim Did CRU change processing software H3 to H4? Same H4.3 station data gives difference of 0.15C for 2007. How come?

      Tim Osborn ?@TimOsbornClim 1 Jun 2015
      @clivehbest Don’t think so for HadCRUT3–>4 (i.e. land+SST). Only processing change I recall is CRUTEM4 global mean (land only) 1/2

      Tim Osborn
      ?@TimOsbornClim
      @clivehbest CRUTEM3: GL = (SH+NH)/2 but CRUTEM4: GL = (SH+2NH)/3. The 0.15C difference is between what and what? 2/2

      That was the cause of an apparent increase of 0.15C in maximum temperatures. For HadCRUT4 I assume they still use (NH+SH)/2.
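
      The effect of the weighting change Tim Osborn describes is easy to sketch. A minimal example (the anomaly values below are illustrative placeholders, not actual CRUTEM data): whenever the NH anomaly exceeds the SH anomaly, the CRUTEM4 formula raises the global mean by (NH−SH)/6 relative to the CRUTEM3 formula.

```python
# Sketch of how the CRUTEM hemispheric weighting change shifts the global mean.
# The anomaly values are illustrative, not actual CRUTEM data.

def global_mean_crutem3(nh, sh):
    """CRUTEM3 convention: simple mean of the two hemispheric anomalies."""
    return (nh + sh) / 2.0

def global_mean_crutem4(nh, sh):
    """CRUTEM4 convention: NH weighted twice, reflecting its larger land area."""
    return (2.0 * nh + sh) / 3.0

nh_anom = 1.2   # hypothetical NH land anomaly (degrees C)
sh_anom = 0.6   # hypothetical SH land anomaly (degrees C)

g3 = global_mean_crutem3(nh_anom, sh_anom)   # -> 0.90
g4 = global_mean_crutem4(nh_anom, sh_anom)   # -> 1.00
print(f"CRUTEM3-style: {g3:.2f}  CRUTEM4-style: {g4:.2f}  shift: {g4 - g3:+.2f}")
```

      With recent NH land warming faster than SH land, a shift of this sign in the published global values is what one would expect from the formula change alone.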

  15. Greg says:

    Thanks Clive,

    So if you are familiar with their Perl script it should be fairly trivial to mod it to do a simple weighted average of the whole globe in one hit. I see no reason to do this in two steps, with a weighting in the second step that is a fag-packet estimate of the required weighting.

    It would be interesting to see how a simple area weighted average of all the data compares to their two stage averaging process.
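
    The single-step average suggested here is straightforward: weight every grid-cell anomaly by the cosine of its latitude (proportional to cell area) and average once over the globe. A minimal sketch in Python with NumPy, using random placeholder values in place of real CRUTEM gridded anomalies:

```python
import numpy as np

# One-pass area-weighted global mean over a 5x5 degree grid, weighting each
# cell by cos(latitude). The anomaly field is a random placeholder, not data.
rng = np.random.default_rng(0)
lats = np.arange(-87.5, 90.0, 5.0)     # 36 cell-centre latitude bands
lons = np.arange(-177.5, 180.0, 5.0)   # 72 cell-centre longitudes
anoms = rng.normal(0.5, 1.0, size=(lats.size, lons.size))  # fake anomalies (C)

# Cell area scales with cos(latitude); broadcast the weights across longitudes.
w = np.cos(np.radians(lats))[:, None] * np.ones_like(anoms)

# In practice, mask missing cells in both numerator and denominator alike.
global_mean = np.sum(w * anoms) / np.sum(w)
print(f"Area-weighted global mean anomaly: {global_mean:.3f} C")
```

    Comparing this one-pass mean against the (2*NH+SH)/3 two-stage value on the same gridded data would show exactly how much the fixed hemispheric weighting contributes.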

    • Greg says:

      That’s not to say that NH/SH means are not useful in their own right. To a large extent the atmospheric and oceanic circulations are quite separate. In SST I’d even prefer to work with averages of individual basins. But for those obsessed with reducing the whole question of climate to one number: do a proper average, not a two-stage hack.

      • Broadlands says:

        It should be done with LAND hemisphere averages as well, in arriving at one “global” number from two separate halves. The hemisphere where most people reside should be different from the other. Is it? How much is it? Worry about the UHI effect later? Where are these LAND hemisphere numbers?

        • Greg says:

          This whole article was about LAND temps, what is your point?

          If you want the data follow the links provided by Clive at the end of his article.

          • Broadlands says:

            Simple question, Greg… What are the two LAND hemisphere temperatures? Can you simply tell us what they are, two values, and where you found them? Follow any link you want by anyone you want. My point? How have they changed over time in comparison with the Sea temperatures? Presumably, the Global Land and Sea temperatures use them?

  16. Clive, have a look at Grotch’s 1987 paper, which has the graph cited by Lindzen in his Lysenko lecture on Youtube.

    Journal: Monthly Weather Review (MWR) American Meteorological Society (AMS)
    http://journals.ametsoc.org/doi/abs/10.1175/1520-0493(1987)1152.0.CO%3B2

    Grotch, S. L. (1987). Some considerations relevant to computing average hemispheric temperature anomalies. Monthly weather review, 115(7), 1305-1317.


    Download: http://journals.ametsoc.org/doi/pdf/10.1175/1520-0493%281987%291152.0.CO%3B2

  17. Sorry. Here is the link to the paper.

    Grotch carefully analyzed the global data for land (CRU). Up to 1980/1984, he found 0.36 degrees C increase per century for land. His graph for oceans shows no increase to 1980.

    The trend is the same from around 1880 to 1984 as from 1900-1980. Including 1851-1880 reduces the trend substantially. But his data for number of stations cast doubt on data prior to 1880, because there were so few stations.

    Grotch, S. L. (1987). Some considerations relevant to computing average hemispheric temperature anomalies. Monthly weather review, 115(7), 1305-1317.

    http://journals.ametsoc.org/doi/abs/10.1175/1520-0493(1987)1152.0.CO%3B2

  18. That doesn’t work either. Below I inserted a hyphen after http. The link works by copying and removing that hyphen. But not the one after 1520.

    http-://journals.ametsoc.org/doi/abs/10.1175/1520-0493(1987)1152.0.CO%3B2

    http-://journals.ametsoc.org/doi/abs/10.1175/1520-0493%281987%29115-1305%3Ascrtca-2%2E0%2Eco%3B2.pdf
