Sahara Temperature Trends

I have been interested in analysing temperature trends measured by weather stations in the Sahara region. This is for two main reasons. The first is that with so little humidity in the atmosphere any effects caused just by increases in CO2 concentrations should be more evident. Water vapor is the Earth’s dominant greenhouse gas, and any changes in humidity, whether natural or perhaps linked to climate feedbacks, are difficult to untangle from CO2 increases. Consequently arid places with little change in humidity should help to reduce this uncertainty. The second reason is that the Sahara has not seen the massive human development seen in other places. I have downloaded the HADCRU station data released in July 2011 from the Met Office, available here. I have also been using and modifying the Perl files kindly provided, which are used to calculate the gridded data for global temperature “anomalies”. The details of this are taken from the HadCru site below.

“Stations on land are at different elevations, and different countries estimate average monthly temperatures using different methods and formulae. To avoid biases that could result from these problems, monthly average temperatures are reduced to anomalies from the period with best coverage (1961-90). For stations to be used, an estimate of the base period average must be calculated. Because many stations do not have complete records for the 1961-90 period several methods have been developed to estimate 1961-90 averages from neighbouring records or using other sources of data. “

To identify relevant Saharan stations for the study, I selected those stations whose latitude and longitude lie between LAT[15,28] and LON[-12,30]. When doing this I noticed that the files actually store longitude as positive heading west, which is opposite to the normal convention, so I had to correct for this. Each station was also required to have data covering at least the period 1960 – 2000. There were 18 stations which satisfied these criteria, listed below (a small selection sketch follows the list):

IN SALAH      Algeria
TAMANRASSET   Algeria
BILMA         Niger
AGADEZ        Niger
TESSALIT      Mali
KIDAL         Mali
TOMBOUCTOU    Mali
GAO           Mali
NIORO DU SAHEL Mali
NARA          Mali
HOMBORI       Mali
MENAKA        Mali
BIR MOGHREIN   Mauritania
TIDJIKJA      Mauritania
SEBHA         Libya
KUFRA         Libya
DAKHLA        Egypt
FAYALARGEAU   Chad
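
To make the selection criterion concrete, here is a minimal sketch in Python (the actual processing used the modified Perl scripts mentioned above; the dictionary field names are just illustrative assumptions):

[sourcecode]
# Select stations inside LAT[15,28], LON[-12,30] with data covering 1960-2000.
# Note the sign flip: the station files store longitude as positive heading west.

def is_saharan(st):
    lat = st["lat"]
    lon = -st["lon"]                      # convert west-positive longitude to the usual convention
    in_box = 15.0 <= lat <= 28.0 and -12.0 <= lon <= 30.0
    covers = st["start_year"] <= 1960 and st["end_year"] >= 2000
    return in_box and covers

# Example using the FAYALARGEAU header shown further below
faya = {"lat": 18.0, "lon": -19.1, "start_year": 1946, "end_year": 2007}
print(is_saharan(faya))                   # True: 18.0N, 19.1E, data 1946-2007
[/sourcecode]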

For each of these stations the data provide the average monthly temperature over time together with the calculated “normals” and standard deviations. The anomalies for each of the 18 stations are then calculated by subtracting the “normals” from these temperature values. An example “snippet” from the station file for FAYALARGEAU, Chad is shown below.

Number= 647530
Name= FAYALARGEAU
Country= CHAD
Lat=   18.0
Long=  -19.1
Height= -999
Start year= 1946
End year= 2007
First Good year= 1946
Source ID= 10
Source file= Jones+Anders
Jones data to= 1977
Normals source= Data
Normals source start year= 1961
Normals source end year= 1977
Normals=  20.2  22.5  26.1  30.6  33.0  34.0  33.1  32.7  32.6  29.4  24.6  21.3
Standard deviations source= Data
Standard deviations source start year= 1946
Standard deviations source end year= 1977
Standard deviations=   1.6   1.8   1.6   1.2   1.2   0.9   0.9   1.0   1.2   1.3   1.6   1.6
Obs:
1946  21.9  21.2  26.4  31.8  35.3  35.4  34.3  33.7  34.1  31.2  27.2  22.3
1947  20.9  26.7  26.8  31.2  34.4  36.0  35.0  34.5  34.0  30.9  25.2  23.3
1948  21.4  22.8  23.7 -99.0  34.9  35.4  34.2  33.9 -99.0  28.6  23.7  19.2
1949  20.2  19.7  26.3  28.9  34.0  33.5  32.5 -99.0  31.3  28.8  24.7  19.0
.......
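
For illustration, a minimal Python sketch of this per-station anomaly step (not the Perl actually used; -99.0 is treated as a missing month, as in the snippet above):

[sourcecode]
# Subtract the 12 monthly "Normals" from each observed monthly value.

def station_anomalies(normals, obs):
    """normals: 12 monthly normals; obs: dict mapping year -> list of 12 monthly temps."""
    anoms = {}
    for year, temps in obs.items():
        anoms[year] = [round(t - n, 1) if t > -90.0 else None   # -99.0 marks a missing month
                       for t, n in zip(temps, normals)]
    return anoms

normals = [20.2, 22.5, 26.1, 30.6, 33.0, 34.0, 33.1, 32.7, 32.6, 29.4, 24.6, 21.3]
obs = {1946: [21.9, 21.2, 26.4, 31.8, 35.3, 35.4, 34.3, 33.7, 34.1, 31.2, 27.2, 22.3]}
print(station_anomalies(normals, obs)[1946][:3])   # [1.7, -1.3, 0.3]
[/sourcecode]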

To summarise: the calculation of the grid of global average temperature anomalies used by Hadley-Cru is based on a per-station “normal” set of monthly data values and associated standard deviations for 1961-1990. These normal values for each station are the monthly averages for that particular station over the period from 1961 to 1990. For the Saharan study the 18 station anomalies are all plotted together in Figure 1. There appears to be a very slight rise in recent times above the red zero line.

Fig 1: Plot of all 18 station anomalies. The red line is the zero offset relative to the monthly normal values.

I next take the temperature values for each of the 18 stations and average them all together to get a single time series. This is shown in Figure 2, together with a 50-point rolling average for smoothing. There is little evident change in mean temperatures. Figure 3 shows the averaged anomalies for all 18 stations over the full time period, together with a least squares linear fit, and a clear rise in net anomaly now becomes evident. The trends of the averaged temperatures and of the averaged anomalies are significantly different. The averaged anomalies, as used by the HadCru gridding algorithm, show an almost linear slow rise of about 1 degree C in temperature anomaly from 1950 to 2011. However there is no real sign of this at all in the station-averaged temperatures over the same time period. Instead it is just the temperature extremes, both positive and negative, which seem to increase slightly.
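
For reference, a minimal sketch of the station averaging and 50-point running mean behind Figure 2 (the plots themselves were made with separate graphing software; the NaN-for-missing array layout is an assumption):

[sourcecode]
# Average whatever stations report in a given month, then smooth with a
# simple 50-point running mean (boxcar), as used for the figure.

import numpy as np

def station_average(series_list):
    """series_list: equal-length monthly series, one per station, np.nan for gaps."""
    return np.nanmean(np.vstack(series_list), axis=0)

def running_mean(x, n=50):
    kernel = np.ones(n) / n
    return np.convolve(x, kernel, mode="valid")

# toy usage with two short fake station series
s1 = np.array([20.0, 21.0, np.nan, 23.0])
s2 = np.array([19.0, np.nan, 22.0, 24.0])
print(station_average([s1, s2]))          # [19.5 21.  22.  23.5]
[/sourcecode]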

Fig 2: The average monthly temperatures for the 18 stations. Shown too is a 50-point smoothed average, which shows little change in "average" temperature although the monthly extremes seem to increase.

Fig 3: Average of the calculated anomalies from the 18 stations using the provided monthly normals. Shown is a linear least squares fit through the data.

Could there be a systematic effect in the analysis which accentuates any small increase when averaging together all the temperature anomalies? To investigate this I calculated a new set of anomalies based on all 18 stations. The normals were calculated by averaging temperatures over all 18 stations for each month over the period 1961-1990. The “net” anomalies were then calculated by subtracting these “averaged” normals from the averaged temperatures over the full period. The result is shown in Figure 4.
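
A minimal sketch of this alternative normalisation (a Python illustration rather than the actual scripts; the years × 12 array layout is an assumption):

[sourcecode]
# Average the stations first, then derive one set of 12 monthly normals from
# that averaged series over 1961-1990 and subtract them.

import numpy as np

def anomalies_of_average(avg_temps, years, base=(1961, 1990)):
    """avg_temps: (n_years, 12) array of station-averaged monthly temperatures."""
    avg_temps = np.asarray(avg_temps, dtype=float)
    years = np.asarray(years)
    in_base = (years >= base[0]) & (years <= base[1])
    norms = np.nanmean(avg_temps[in_base], axis=0)   # the 12 "averaged" normals
    return avg_temps - norms                         # subtract month by month (broadcast)
[/sourcecode]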

Fig 4: Anomalies for the average temperatures for 18 Saharan Stations. The red line is a least squares fit through the data.

The anomaly trend is now essentially flat. What can be the difference between using the two different normalisation values? It seems that averaging the anomalies gives a slightly different result to the anomalies of the averages (Figure 5). However the conclusion is crucial as to whether temperatures have risen overall in the Sahara over the last 60 years. In the first case the conclusion is that average temperatures in the Sahara have increased since 1950 by just over 1 degree C while CO2 concentrations increased by about 80 ppm, in line with IPCC predictions. In the second case the conclusion is that there has been no significant change in temperature at all during the last 60 years. The raw monthly average temperature measurements also support little or zero net change. I have always worried about the use of “anomalies” instead of temperatures because it assumes there is one “normal” period (1961-1990) to which all other temperatures should be referenced. Could this assumption itself introduce a bias?
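
As a toy illustration of why the two normalisations need not agree once the set of reporting stations changes over time (this is not the Saharan data, just two made-up series):

[sourcecode]
# If a relatively cool station stops reporting, the anomaly of the averaged
# temperatures picks up a step that the averaged per-station anomalies do not.

import numpy as np

warm = np.array([30.0, 30.0, 30.0, 30.0])      # four "years", no real trend
cool = np.array([20.0, 20.0, np.nan, np.nan])  # the cool station drops out halfway

avg_temp = np.nanmean(np.vstack([warm, cool]), axis=0)   # [25. 25. 30. 30.]
anom_of_avg = avg_temp - np.nanmean(avg_temp[:2])        # baseline from the first two "years"
avg_of_anoms = np.nanmean(np.vstack([warm - 30.0, cool - 20.0]), axis=0)

print(anom_of_avg)    # [0. 0. 5. 5.]  a step caused purely by the change in coverage
print(avg_of_anoms)   # [0. 0. 0. 0.]  unaffected
[/sourcecode]

In practice the size and direction of any such difference depends on which stations are missing in which years, so the disagreement between Figures 3 and 4 is at least partly a question of how the missing data are handled.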

Fig 5: Detailed Comparison between the two anomalies. Both are normalised to the period 1961-1990.


15 Responses to Sahara Temperature Trends

  1. P. Solar says:

    You seem to be in some confusion about a non-existent body, “Hadley CRU”. I did not want to pick this up on your post at WUWT because I thought it might be a slip-up. However, I’ve now seen it several times.

    HadCRUT3 is a combination of two databases: the Met Office Hadley Centre’s SST sea data (referred to as HadSST2 and HadSST3) and the University of East Anglia’s Climate Research Unit land-based dataset CRUTem3.

    Both organisations tend to host both the individual datasets and the combined HadCRUT3 dataset, but there is no body called HadCRU and there is no “HadCRU site”.

    I hope that clarification is helpful.

    It is UEA CRU that was at the heart of the climategate scandal, and their director Prof Phil Jones who claims to have “lost” all the original temperature records and said he would rather (illegally) delete data files than make them available to anyone who might wish to verify their work.

    As a result, responsibility for archiving the land data has now been transferred to the Hadley Centre.

    • Clive Best says:

      Yes, you are right, I am being sloppy with terms. The Hadley Centre is part of the UK Met Office and CRU is part of the University of East Anglia. It gets confusing because there is the HADCRUT dataset combining SST and land temperatures, and then there is the CRUTEM3 dataset which is land-based only.

      So what I wrote above is WRONG. It is the raw station data I am using, together with the software provided by the Hadley Centre.

      Luckily the (monthly) temperature records were not deleted. Is the raw daily data available anywhere?

  2. P. Solar says:

    I’m trying to understand why there is this difference between the two ways of processing this data. It seems odd and may be significant. Your reworking of CRUtem3 made it more consistent with the SST data for the late 19th c., so there is at the very least something that needs to be explained here.

    However, I would suggest for any serious scientific work (and you seem serious) you use a better filter.

    Running mean should die! It is a lousy filter and can produce some very misleading results. Unfortunately you don’t need to think to use it, so it has wide appeal.

    Here is a comparison of the frequency response of a running mean and a decent filter like a Gaussian. Just look at the amount of “stop band” crap that the running mean lets through.
    http://tinypic.com/view.php?pic=nevxon&s=5

    Now look at what can happen when you use it:

    That’s the same data treated with the two filters above, though you’d not recognise it.

    Note what happens to the 1940 peak. Notice the amount of h.f. still present; it’s not “smooth” at all. Other special features include bending of peaks (1960) and peaks that go upside down (1985).

    It would be interesting to see figure 2 with a proper filter applied.

    A Gaussian is barely more trouble than a running mean to calculate; it is just a weighted mean. If you want an awk script to do it, feel free to email me.
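
    For illustration, the weighted mean amounts to something like this (a rough sketch in Python rather than awk; kernel width and edge handling are left crude):

    [sourcecode]
    # Gaussian smoothing: each output point is a Gaussian-weighted average of its
    # neighbours rather than an equally weighted (running mean) one.

    import numpy as np

    def gaussian_smooth(x, sigma=10.0):
        half = int(3 * sigma)                              # truncate the kernel at 3 sigma
        k = np.exp(-0.5 * (np.arange(-half, half + 1) / sigma) ** 2)
        k /= k.sum()                                       # normalise the weights
        return np.convolve(x, k, mode="same")
    [/sourcecode]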

    • Clive Best says:

      I think a running mean is not good either. It just happened to be the easiest option with the graphing software I was using. I will look into a better analysis as you suggest. In the meantime, I have posted the data for Figure 2 and the new normals here. The first column contains the anomalies and the second column the averages. The time axis has been decimalised.

      You can also view the individual temperatures for each (any) station by using the station number in URLs like

  3. P. Solar says:

    I think you may be seeing a false problem here due to different scaling and presentation.
    The lack of a grid on the graphs does not help, but if I eye-ball fig 1 and ignore the bump around y2k, I estimate a rise of about 0.7C.

    Despite my mistrust of running means I get about the same from figure 2, though the different scale gives the impression it is “nearly flat”; the “nearly” is about 0.7C.

    The fitted line in figure 3 is fairly clearly 0.7C over the full range of the data.

    The only problem is your OLS fit in figures 4 and 5. This seems to be getting pulled down near the end by a few negative excursions. One problem (feature?) with OLS is in the name “least squares”. It aims to minimise the SQUARES of the errors. This makes it sensitive to outliers.

    I think what you see in the calculated slopes is simply a result of using an OLS fit.

    One thing you could do is apply a decent filter like a gaussian before doing the fit. This would average out spikes, so that their values are still counted but don’t have an undue weight in the OLS fit.

    >>
    I have always worried about the use of “anomalies” instead of temperatures because it assumes there is one “normal” period (1961-1990) to which all other temperatures should be referenced.
    >>

    There is no “assumption” that the reference period is special. It is an arbitrary reference point. Choose a different period as reference and you’ll get the same frequency analysis, trend etc.

    It’s well worth raising these questions but I don’t think the reference period can introduce a bias. (Unless you do something silly like OLS a trend 😉 )

    Try doing an FFT on what you have here and the same thing with a different base period. You’ll just see the DC component raised by the difference of the two base temperatures; the spectrum will remain identical, thus no distortion or increased slope.

    If you do get a significant difference in FFT , then there’s something to look into.
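
    As a quick sketch of that check (Python; a made-up series x stands in for your averaged anomalies, and the constant shift plays the role of a different base period):

    [sourcecode]
    # Shifting the whole series by a constant only changes the zero-frequency bin.

    import numpy as np

    x = np.sin(np.linspace(0.0, 20.0 * np.pi, 720)) + 0.1 * np.random.randn(720)
    a = np.abs(np.fft.rfft(x))
    b = np.abs(np.fft.rfft(x + 0.5))          # same series with a different "baseline"

    print(np.allclose(a[1:], b[1:]))          # True: all non-DC bins identical
    print(a[0], b[0])                         # only the DC component differs
    [/sourcecode]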

  4. P. Solar says:

    >>
    Luckily the (monthly) temperature records were not deleted. Is the raw daily data available anywhere ?
    >>
    AFAIK this is what they say they “lost”. If that is untrue, it is gross misconduct. If it is true it means their data record is little better than fiction. I cannot imagine any realm of science or engineering where throwing out your source data would be acceptable.

    CRUTem3 and any dataset including or derived from it is now baseless.

    The cornerstone of science is verification. They have ensured that is not possible. As a result we are expected to turn the world economy upside down, taking the derived data “on trust”.

    The content of CRU emails shows that trust is not warranted, neither in terms of their competence nor their professional ethics.

  5. P. Solar says:

    I think your choice of the Saharan region is a revealing one. It may shed some light on the question of whether working on anomalies is useful and/or whether there is an error in the implementation. One significant feature of deserts is the large temperature swings on a diurnal scale. This is in large part due to the clear night skies, which comes back to your reason for choosing such a region for study.

    I’ve thought about this before and always came to the conclusion that it should be mathematically identical. This implies a processing error from what you show above.

    Consider a reduced case of this process using just the first three months of each year:

    [sourcecode]
    J_anom = Jan[i] - J_norm
    F_anom = Feb[i] - F_norm
    M_anom = Mar[i] - M_norm
    [/sourcecode]

    find the first quarter mean:

    [sourcecode]
    Q1_mean = (J_anom + F_anom + M_anom)/3
            = (Jan[i] + Feb[i] + Mar[i])/3 - (J_norm + F_norm + M_norm)/3
            = absolute_mean - mean_norm
            = absolute_mean - const
    [/sourcecode]

    mean_norm is a fixed constant for the entire data set and represents the arbitrary offset for the period. It should be remembered that the zero references for both the Celsius and Fahrenheit scales are equally arbitrary!

    Applying this result to the similarly calculated annual mean shows that the results should be identical except for a fixed offset that depends on the period chosen for calculation of the mean temperature.

    Unless I’m mistaken this means there is a methodological error if the two do not give identical results.

    >>
    These normal values for each station are the monthly averages for that particular station over the period from 1961 to 1990.
    >>

    I don’t think it’s that simple. The script you have is not what is used by CRU to create the “climatology” against which temperatures are compared.

    I think the Met Office site links to a paper, Brohan 2006?, that details the method. It recommends reading the paper before using the data. It’s pretty fancy stuff involving “pentads” of five days and pseudo-months of six pentads, with August getting 7.

    However, the same general argument would seem to apply no matter how the “normal” temperatures are calculated, so long as they remain fixed.

    There are also data quality checks that remove certain records deemed to be too far outside the expected variance. Now here there may be a chink in the armour.

    I notice that one period where you have significantly lower values is around 1992. This incidentally would seem to correspond to Pinatubo fallout. There is also a spike in 2006. It may be that their data mangling is throwing out something that you are including (since you don’t throw stuff out).

    I suggest you focus on those two glitches to understand the differences.

    Once you have that, I think you will find they are also throwing out data in the late 19th c. where variability was higher, and that may then explain your findings that you posted on WUWT.

    • Clive Best says:

      I have been reading the Brohan et al. paper. The normals were revamped around 2005. To quote:

      “Also the station normals and standard deviations were improved. The station normals (monthly averages over the normal period 1961–90) are generated from station data for this period where possible. Where there are insufficient station data to achieve this for the period, normals were derived from WMO values [WMO, 1996] or inferred from surrounding station values [Jones et al., 1985]. For 617 stations, it was possible to replace the additional WMO normals (used in [Jones & Moberg, 2003]) with normals derived from the station data. This was made possible by relaxing the requirement to have data for four years in each of the three decades in 1961–90 (the requirement now is simply to have at least 15 years of data in this period), so reducing the number of stations using the seemingly less reliable WMO normals. As well as making the normals less uncertain (see the discussion of normal error below), these improved normals mean that the gridded fields of temperature anomalies are much closer to zero over the normal period than was the case for previous versions of the dataset.”

      What I did was to calculate new normals as the average temperature each month across all 18 Saharan Stations between 1961-1990. Many of the stations have a lot of missing data points so by combining them all together we get nearly full coverage. I then subtracted these averaged normals from the average temperature. The result was Figure 4.

  6. P. Solar says:

    PS I think the “harry.README” file included in the original climategate email release was a personal log file of a CRU insider who was given the job of trying to understand the mess that was the code base used for the original data processing, and of trying to get it to run again.

    I spent about an hour reading it and had only got about 20% of the way through. I’m sure that has plenty of coverage on the net.

  7. P. Solar says:

    >>
    What I did was to calculate new normals as the average temperature each month across all 18 Saharan Stations between 1961-1990. Many of the stations have a lot of missing data points so by combining them all together we get nearly full coverage. I then subtracted these averaged normals from the average temperature. The result was Figure 4.
    >>

    So your “normal” now has a spurious variation in time and space, spanning the whole area and study period that depends on the arbitrary availability of data at various times and places.

    I’m not surprised that you are seeing greater variability in the data and a different long term trend.

    I think you would have a hard job arguing that this was in some way better than the “official” result.

  8. Clive Best says:

    The stations are spread around the Saharan region with slightly different annual temperature ranges. Individual station data are rather dirty, with missing years and missing months. We want to know if temperatures in the desert have risen since 1950. We can either calculate normals at each station from 1961-1990, derive anomalies, and average them together (the standard method); if we do this we see about a 0.7 degree increase. Otherwise we can average all the temperatures together and then calculate the normals from this average between 1961-1990. Then we find these “averaged” anomalies show little temperature rise.

    To see the actual data better, you can view them plotted together with the average here. The sideways lines join up missing years of data at individual stations. This is why it is difficult to define normals per station. I suspect the normals included in the station files themselves may have been extrapolated from neighbours, as discussed in the paper.

  9. Pingback: Water – zero or negative climate feedback | Clive Best

  10. desert1971 says:

    Hello Master Clive Best,
    I am a meteorological engineer working in Tamanrasset (south of Algeria). I started to study the trend in my Sahara, but with real data.
    Can you help me?

  11. Eric Barnes says:

    Hi Clive,
    I’ve often wondered about the effect of increased CO2 and how it affects the diurnal temperature curve. There is an Integrated Surface Hourly (ISH) dataset that has data by the hour: ftp://ftp.ncdc.noaa.gov/pub/data/noaa/ Have you ever considered doing an analysis on it? Coverage is a bit spotty in the Sahara in the 60’s, but even one station would be interesting to analyze when controlling for wind/cloud ceiling/etc.

    My own thought, which others have had of course, is that CO2 will speed the movement of heat up and down the air column and so moderate extremes in temperature. It is interesting that the all-time surface highs in the desert (Furnace Creek, Oodnadatta in South Australia, Kebili) are in the early to mid 20th century.

  12. Pingback: Wie wird der Einfluss von CO2 auf das Klima gemessen II – überhaupt nicht | Freie Abgeordnete
