Will Fusion or Fission power the world?

Civilisation needs a long-term, reliable power source if it is to avoid eventual collapse. Renewable energy cannot achieve this because a) it is inherently intermittent and needs gas backup, b) installations last less than 20 years yet require huge areas of land or sea, and c) their renewal depends on steel, tarmac, plastics and heavy machinery. Nuclear energy, on the other hand, is truly zero carbon, lasts over 60 years and can, if needed, produce hydrogen and biofuel and recharge batteries at night.

The development of nuclear energy in the UK was damaged by a bad press, Chernobyl and Three Mile Island. The only known direct fatalities from these incidents were the 28 deaths among the firefighters at Chernobyl who extinguished the fire on the roof. Another Chernobyl-style accident is impossible in modern PWRs, which have an excellent safety record. The accident at Fukushima was caused by a large tsunami; the accident itself killed no-one and could easily have been avoided had the emergency generators been sited on the roof of the reactor building. Nuclear France has a very good safety record, whereas fossil fuels and offshore wind have far worse fatality rates. France also has among the lowest carbon emissions per capita in Europe.

Nuclear energy comes in two flavours: fission and fusion.

  1. The first fission chain reaction, using natural uranium and a graphite moderator, was achieved in a Chicago squash court in 1942 by a team led by Enrico Fermi. It was literally a “pile” of uranium and graphite bricks arranged to achieve a chain reaction. Luckily the power generated was low (~1 watt) since there was no shielding. All the natural uranium occurring on Earth was produced in the supernova explosions of massive early-generation stars. Modern fission reactors use enriched uranium fuel rods in a pressurised water reactor (PWR). France is the leader in deploying low-carbon nuclear energy, and its commercial arm EDF is building Hinkley C and runs the Sizewell B fission reactor, which maintains power for up to 2 years before refuelling.
  2. Nuclear fusion is the fusion of light elements (mainly hydrogen) in stars to generate heavier elements up to iron, until the star runs out of hydrogen. The largest stars then explode in a supernova, producing heavier elements including uranium. The quest to exploit fusion on Earth to generate electricity began at Harwell in the 1950s and is still ongoing today. Why is it so hard? To make fusion work you need to heat and contain a burning plasma of hydrogen isotopes (deuterium and tritium) and then extract the heat energy to drive turbines and generate electricity (a rough feel for the numbers involved is sketched just after this list). The power cycle must also breed new tritium fuel to maintain a steady-state operational cycle lasting many months. Tokamaks are the leading design for future fusion reactors, if they can demonstrate steady-state electricity generation like fission. ITER (the International Thermonuclear Experimental Reactor) is being constructed in France. However, other designs have emerged in recent years, such as spherical tokamaks and inertial fusion. The UK is no longer a direct member of ITER and has announced plans to build a fusion power station, STEP (Spherical Tokamak for Energy Production), on the site of an old coal power station in Nottinghamshire, to be connected to the grid by 2040.
  3. Inertial fusion implodes small DT samples with lasers or projectiles to release energy. Recently the Lawrence Livermore National Laboratory achieved ignition through laser implosion of a DT pellet, with a small energy gain. The UK company First Light Fusion uses extreme electromagnetic pulses to launch projectiles that focus energy onto a DT target.
  4. Commercial fusion. An interesting development is the growth in privately funded companies trying alternative approaches to the large international efforts. The potential pay-off, if a new energy source succeeds, is immense. In the UK, Tokamak Energy is developing spherical tokamaks with high-temperature superconductors.
  5. The design of a tokamak fusion reactor is rather complicated. Current tokamaks run as a series of energy pulses. The energy produced by the fusion of nuclei is absorbed as heat in the walls of the vessel. Just like a conventional power plant, a fusion power plant will use this heat to produce steam and then electricity by way of turbines and generators. Any damage to the walls or the cooling system by plasma disruptions must be avoided.
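
For a rough feel of why fusion is so hard, here is a back-of-envelope Python calculation using the commonly quoted D-T ignition condition, the "triple product" n·T·τE of roughly 3×10^21 keV s m^-3 near 15 keV. The plasma density and temperature chosen are illustrative, not those of any particular machine.

# Back-of-envelope Lawson-criterion estimate (illustrative values only)
E_DT_MeV = 17.6              # energy per D-T fusion: 3.5 MeV alpha + 14.1 MeV neutron
triple_product_min = 3e21    # approximate D-T ignition threshold, keV s / m^3

T_keV = 15.0                 # assumed plasma temperature (~170 million K)
n_m3 = 1e20                  # assumed plasma density in m^-3

tau_E = triple_product_min / (n_m3 * T_keV)
print(f"Each D-T reaction releases {E_DT_MeV} MeV")
print(f"Energy confinement time needed: ~{tau_E:.1f} s at {T_keV} keV and {n_m3:.0e} m^-3")
# i.e. a plasma at ~170 million degrees must hold its energy for a couple of seconds -
# which is what tokamak confinement (plus tritium breeding) has to deliver continuously.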

The JET closing ceremony was held on 28th March after a successful DT campaign which beat the world record for fusion energy.

 

I was recently involved in an online discussion about fusion prospects compared to fission with John Carr (an ex-CERN colleague) and Michel Claessens from ITER.

What is clear is that in the short term (the next 10-15 years) we need to build several EPR fission reactors. There are 8 nuclear sites in the UK and so far just 2 of these sites (Hinkley and Sizewell) will host EPRs, providing a baseload power of 6 GW. The UK will need at least double this value to maintain energy security.

It was announced today (12/03/24) that the UK will have to build several new gas-powered stations because of the reliability shortfalls in renewable energy supply!

Could Fusion eventually be the game changer?

 

 


Global Temperatures for 2023

December saw a rise in the global average temperature anomaly to 1.27C, resulting in a final annual value of 1.13C for 2023. This is an increase of just under 0.3C on 2022 and makes 2023 the warmest year so far (as widely reported). My calculation is based on spherical triangulation and uses GHCN corrected land data and the latest HadSST sea surface temperature data. It is the same basic data as used by all the other groups (Berkeley, Hadley/CRU, NASA etc.).
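
For the curious, here is a minimal Python sketch of the spherical triangulation idea (not the production code): place the measurement locations on the unit sphere, triangulate them via the convex hull, and weight each anomaly by the area of its surrounding spherical triangles. The function names and interface are illustrative.

import numpy as np
from scipy.spatial import ConvexHull

def to_unit_vectors(lat_deg, lon_deg):
    """Convert latitude/longitude in degrees to 3D unit vectors on the sphere."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    return np.column_stack([np.cos(lat) * np.cos(lon),
                            np.cos(lat) * np.sin(lon),
                            np.sin(lat)])

def spherical_triangle_area(a, b, c):
    """Solid angle of the spherical triangle a-b-c (unit sphere), via L'Huilier's theorem."""
    ang = lambda u, v: np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))
    A, B, C = ang(b, c), ang(a, c), ang(a, b)          # side lengths in radians
    s = 0.5 * (A + B + C)
    t = np.tan(s / 2) * np.tan((s - A) / 2) * np.tan((s - B) / 2) * np.tan((s - C) / 2)
    return 4.0 * np.arctan(np.sqrt(max(t, 0.0)))

def global_anomaly(lat, lon, anom):
    """Area-weighted global mean anomaly from scattered (lat, lon, anomaly) points."""
    pts = to_unit_vectors(np.asarray(lat), np.asarray(lon))
    hull = ConvexHull(pts)          # hull facets of points on a sphere form a triangulation
    weighted, total = 0.0, 0.0
    for i, j, k in hull.simplices:
        area = spherical_triangle_area(pts[i], pts[j], pts[k])
        weighted += area * (anom[i] + anom[j] + anom[k]) / 3.0
        total += area
    return weighted / total         # total is ~4*pi steradians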

Here is the data for the full year 2023

Annual global temperature anomalies updated for 2023

The monthly data shows just how changeable 2023 really was.

Monthly Temperature Anomalies updated for December 2023

Finally here is an animation showing the temperature distribution on the earth’s surface during December 2023

 

Temperatures during December 2023.

 
Note the continuing El Niño and the very warm temperatures in the USA and Canada, while Antarctica remains cooler than normal.


Whatever happened to the Hiatus in Global Warming?

The surprise news from the IPCC 5th Assessment Report (AR5), published in 2013, was that the earth had hardly warmed at all since 1998. This was the so-called Hiatus in global warming, lasting some 15 years. The Physical Science Basis report of Working Group 1 is an excellent summary of the physics and is still relevant today.

Evolution of “Global Temperatures” after AR5 (2013)

The official temperature series used by the IPCC for the AR5 report was based on the CRU (Climatic Research Unit, University of East Anglia) station data combined with the UK Met Office Hadley Centre's HadSST2 sea surface temperatures. However, all the other temperature series (NOAA, NASA, Hadley/CRU) basically agreed with this conclusion. The AR5 report stated: “The observed GMST has shown a much smaller increasing linear trend over the past 15 years than over the past 30 to 60 years, depending on the observation data set. The GMST trend over 1998-2012 is estimated to be around one third to one half of the trend over 1951-2012”.

There followed a Royal Society open meeting to discuss these results, which I attended (detailed notes are here). The Hiatus became a major discussion topic, especially since all the AR5 climate models were predicting far greater warming than that observed. The official IPCC position at that time was that such pauses in warming were not unexpected and were probably due to natural oscillations like the AMO (Atlantic Multidecadal Oscillation). Warming, though, was expected to restart soon.

Fig 1: Comparison of CMIP5 models and observed temperature trends, showing strong disagreement.

It thus became critically important for climate science to confirm that increasing CO2 levels do indeed cause significant warming in the temperature data. Here is the story of how that objective was achieved within just a couple of years of AR5, after which the Hiatus was quietly forgotten.

Evolution of HadCRUT.

The CRU land data is based on hundreds of meteorological station records collated by CRU. The climatology for each station is defined as the 12 monthly average temperatures calculated over the 30-year period 1961-1990. These are referred to as climate "normals" and act as a baseline. The monthly differences from these normals are called "anomalies", and the global average of these anomalies is the reported monthly and annual global temperature (anomaly). So it is not the actual temperatures which count, but the change in temperature relative to the normal period. The sea surface temperature dataset, HadSST, is maintained by the Hadley Centre (Met Office). Modern measurements are made by floating buoys and satellites. Older measurements are derived from buckets and engine intake temperatures and have far sparser coverage. Systematic differences between old and new methods are estimated (bucket depth etc.) and corrected for. Knowledge of earlier SSTs is therefore much poorer than that of modern SSTs.
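
In code, the normal/anomaly bookkeeping for a single station looks roughly like this. It is only a sketch: the DataFrame columns (year, month, temp) are illustrative and not the actual CRU file format.

import pandas as pd

def station_anomalies(station: pd.DataFrame) -> pd.DataFrame:
    """Add a column of anomalies relative to the 1961-1990 monthly normals."""
    # Climatology ("normals"): the mean temperature of each calendar month over 1961-1990
    base = station[(station.year >= 1961) & (station.year <= 1990)]
    normals = base.groupby("month")["temp"].mean()
    # Anomaly: departure of each monthly value from that month's normal
    out = station.copy()
    out["anomaly"] = out["temp"] - out["month"].map(normals)
    return out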

Each month new temperature data are accumulated. The global temperature anomaly for HadCRUT4 in 2013 was calculated as follows. The monthly temperature anomaly for a weather station is simply the difference between that month's value and its "normal" value, the 30-year (1961-1990) average temperature for that month. Station anomalies were gridded in 5×5 degree lat-lon bins. Bins are weighted by cos(latitude) because the earth is a sphere and bin areas diminish as cos(latitude). Until 2013 the global value was simply (NH + SH)/2, but this was later changed to (2*NH + SH)/3 because there is roughly half as much land area in the SH as there is in the NH. The original software to do this was a PERL script which looped over all the station files (I still have it). This change alone explains some of the apparent temperature increase post-1998 in going from CRUTEM3 to CRUTEM4. However, this is not the full story: in addition, over 600 new stations at high northern latitudes were added in CRUTEM4, while 175 stations in South America which showed cooling were dropped. The net effect of all this was to boost the warming trend by ~30% compared to CRUTEM3.
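
A minimal sketch of that gridding and hemispheric weighting is shown below (assuming one month of station anomalies as (lat, lon, value) tuples; the real CRUTEM code also tracks empty bins, station counts per bin and so on).

import numpy as np

def gridded_global_mean(anoms, bin_size=5.0):
    """CRUTEM4-style global mean: 5x5 degree bins, cos(lat) weights, (2*NH + SH)/3."""
    grid = {}
    for lat, lon, a in anoms:
        key = (int(np.floor(lat / bin_size)), int(np.floor(lon / bin_size)))
        grid.setdefault(key, []).append(a)

    def hemisphere_mean(cells):
        w_sum, s = 0.0, 0.0
        for (ilat, _), vals in cells:
            lat_centre = (ilat + 0.5) * bin_size
            weight = np.cos(np.radians(lat_centre))   # bin area shrinks towards the poles
            s += weight * np.mean(vals)
            w_sum += weight
        return s / w_sum if w_sum else np.nan

    nh = hemisphere_mean([c for c in grid.items() if c[0][0] >= 0])
    sh = hemisphere_mean([c for c in grid.items() if c[0][0] < 0])
    return (2 * nh + sh) / 3       # NH holds roughly twice as much land as the SH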

Jones (2012) wrote: “The inclusion of much additional data from the Arctic (particularly the Russian Arctic) has led to estimates for the Northern Hemisphere (NH) being warmer by about 0.1°C for the years since 2001.”

CRUTEM4.3 then continued this warming trend further by adding yet more Arctic stations, and began using data homogenisation (similar to GHCN V3), as discussed below.

Sea surface temperature data are maintained by the Met Office; the version used in 2013 was HadSST2 (the current version is HadSST4). These sea surface temperatures were mainly ship-based measurements, made either by bucket sampling or at engine inlets. Floating buoys were deployed later in the 20th century. Consequently knowledge of SST is rather poor in the 19th and early 20th centuries and much better once buoy and satellite data were used. Corrections are therefore applied to the earlier data, and much of this is inspired guesswork.

HadCRUT4 is the global temperature average calculated by combining all the land measurements with the SST measurements, using the same basic algorithm as for CRUTEM4. The global average is then simply the cos(latitude)-weighted average over land and ocean 5-degree lat-lon bins.

This then was the situation after AR5: the Hiatus in global warming was a clear effect, confirmed by other groups and in the AR5 report itself. Yet just a few months later the Hiatus slowly began to disappear! So how did that happen? The simple answer is that the underlying temperature measurement data were changed by applying two new "corrections". The first applies an algorithm called "pairwise homogenisation" to all station temperatures. The second is "kriging" of the temperature data, which assumes that one can interpolate temperature anomalies into polar regions without any recorded measurements. We look at each below.

Pairwise Homogenisation.

“Automated homogenisation algorithms are based on the pairwise comparison of monthly temperature series.” The pairwise homogenisation algorithm effectively always enhances any region-wide warming trend. The algorithm first looks for shifts between the "anomalies" of neighbouring weather stations. There can indeed be cases where a station has been relocated to a slightly different altitude, which causes a step-function shift in its average temperatures. However, the algorithm is applied generally between all station pairs, sometimes even thousands of miles apart. This produces a systematic enhanced warming effect, especially when applied to the 30-year normalisation period itself (e.g. 1961-1990).
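
To see what such an algorithm does, here is a much-simplified sketch of the pairwise idea: difference the target station against one neighbour (so the shared climate signal largely cancels), locate the largest apparent step, and shift the earlier segment to match the later one. The operational NOAA pairwise algorithm (Menne & Williams) uses many neighbours and formal significance tests; everything below, including the threshold, is illustrative.

import numpy as np

def largest_step(target, neighbour):
    """Return (index, size) of the largest mean shift in the difference series."""
    diff = np.asarray(target, float) - np.asarray(neighbour, float)
    best_idx, best_shift = None, 0.0
    for i in range(12, len(diff) - 12):        # keep at least a year on each side
        shift = diff[i:].mean() - diff[:i].mean()
        if abs(shift) > abs(best_shift):
            best_idx, best_shift = i, shift
    return best_idx, best_shift

def homogenise(target, neighbour, threshold=0.5):
    """Remove a detected step (deg C) by aligning the earlier segment with the later one."""
    idx, shift = largest_step(target, neighbour)
    adjusted = np.asarray(target, float).copy()
    if idx is not None and abs(shift) > threshold:
        adjusted[:idx] += shift
    return adjusted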

Global temperature comparison for uncorrected (4U)  and corrected data (4) – GHCN.

We see above that the net effect of “homogenisation” is to increase the apparent warming since ~2000 by about 0.15C. How sure are we that these automated algorithmic corrections are even correct? A recent paper has looked in detail at the effects of the pairwise algorithm on GHCN-V4, and the results are surprising. The authors downloaded all daily versions of GHCN-V4 over a period of 10 years, providing a consistency check over time of the corrections that were applied. They studied European stations and found that on average 100 different pairwise corrections were applied during that decade, while only 3% of these corrections actually corresponded to documented metadata events, e.g. station relocations. Only 18% of the corrections applied corresponded to documented station moves within a 12-month window. The majority of "corrections" were intermittent and undocumented.

Another consideration is that comparing the temperature of one station with its near neighbours should occasionally identify stations reading too hot and reduce their recorded temperatures accordingly. Yet this never seems to happen: the result always seems to be a warmer trend compared to that in the raw measurement data.

Here is just one  example from Australia where a warming trend has been imposed on top of a clear station shift.

Click to view an animation of the corrections applied from  stations hundreds of miles away.

Kriging (Cowtan and Way)

Cowtan & Way introduced a version of HadCRUT4 after AR5 which used a kriging technique to extrapolate values into those parts of the world where there are no direct measurements, in particular the Arctic. The end result is always to increase the overall warming rate. All groups now regularly use this type of technique to extend coverage into polar regions.
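
As a crude stand-in for kriging, the sketch below fills empty grid cells with a distance-weighted mean of the measured cells. Real kriging fits a covariance model to the data; this simplified kernel version only illustrates how anomalies get extrapolated into unmeasured (e.g. polar) bins. The names and length scale are illustrative.

import numpy as np

def great_circle(lat1, lon1, lat2, lon2):
    """Angular separation in radians between points on the sphere."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dlon = np.radians(np.asarray(lon2) - lon1)
    return np.arccos(np.clip(np.sin(p1) * np.sin(p2) +
                             np.cos(p1) * np.cos(p2) * np.cos(dlon), -1.0, 1.0))

def infill(empty_cells, measured, length_scale=0.3):
    """Estimate empty (lat, lon) cells from measured (lat, lon, anomaly) cells."""
    mlat, mlon, manom = (np.array(x, float) for x in zip(*measured))
    filled = []
    for lat, lon in empty_cells:
        d = great_circle(lat, lon, mlat, mlon)
        w = np.exp(-(d / length_scale) ** 2)   # nearby cells dominate the estimate
        filled.append((lat, lon, float(np.sum(w * manom) / np.sum(w))))
    return filled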

Sea Surface Temperature Data

Sea surface temperatures have been measured since 1850 using very different methods, ranging from bucket samples and engine inlet temperatures through to buoys and satellite data. The task is complex since the measured temperature varies with depth and other factors, whereas land temperatures are measured 2m above the surface. Complicated methods to correct for such instrumentation changes have been developed. The latest HadSST4 data, which also incorporates satellite corrections to recent buoy data, has added a significant further warming trend. See here for details.

The overall result of both these updates has been to increase the apparent recent warming. This can be seen by comparing the uncorrected global temperature data with the corrected data each calculated in exactly the same way.

Effect of upgrading SST from HadSST3 to HadSST4 is large

So finally the Hiatus has disappeared!

This also continues a general trend ever since AR5 of unifying all the temperature series so that they agree with each other, confirming a higher rate of warming (GHCN, Berkeley Earth, NASA, HadCRUT5). This trend is based on 1) homogenisation of station measurements, 2) infilling of data to include areas without measurements (kriging), and 3) blending of SSTs with 2m land temperatures.

The HadCRUT5 dataset now also uses infilling by default to extend coverage to those 5-degree bins without any data. Shown below is a “modern” comparison of all the major temperature series, as produced by Zeke Hausfather for Carbon Brief.

Basically, the consensus on quantitative global warming was reached simply because everyone now uses the same methodology (homogenisation, kriging, etc.) and the same underlying data to calculate it! There are no independent datasets. That said, our knowledge of temperatures before 1950, let alone pre-industrial temperatures, is still far more uncertain than is made apparent here. As a consequence the vertical scale shown above probably has an uncertainty of over 0.1C, simply because we don't really know what pre-industrial temperatures really were.

One of the most important checks on the validity of scientific results is to have independent groups analysing the raw temperature data using different methodologies. This process seems to have stopped as climate change politics has grown. Despite all the hype, it is still often almost impossible to see any actual warming effect at an individual station. Dublin is a good example.

Temperature trace for Dublin Airport. In red are the recorded temperatures and in green are the temperature anomalies relative to 1961-1990.

Science advances when independent experimental results confirm a theoretical prediction. It rarely works the other way round!
