Sweden

Everyone is watching Sweden, where chief epidemiologist Anders Tegnell has resisted a lockdown. Cafes, schools and restaurants have all remained open throughout. Sweden's policy of voluntary social distancing measures while protecting the elderly seems to be working.

I ran Ferguson’s model for Sweden after finally working out how he normalises it to different populations. Basically, he normalises the model’s death predictions to those actually recorded on a specific date. For the UK this date is April 10 (10,000 deaths). This is how the model compares to Sweden if I use the same date (400 deaths).
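A minimal sketch of that normalisation step (my own illustration, not Ferguson's actual code): scale the model's cumulative-death curve so that it matches the recorded toll on the anchor date.

```python
import numpy as np

def normalise_to_date(model_cum_deaths, recorded_deaths, anchor_day):
    """Rescale a model's cumulative-death curve so that it matches the
    recorded death toll on a chosen anchor day (an index into the curve)."""
    curve = np.asarray(model_cum_deaths, dtype=float)
    scale = recorded_deaths / curve[anchor_day]
    return curve * scale

# Toy curve anchored to 400 recorded deaths on day 3 (Sweden's April 10 analogue)
scaled = normalise_to_date([100, 300, 700, 1600, 3000], 400.0, 3)
print(scaled[3])  # 400.0
```

The same function reproduces the UK case by anchoring to 10,000 deaths on the April 10 index instead.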

ICL model compared to deaths in Sweden. The blue curve is a UK-style lockdown beginning 16th March (a week earlier than the UK). The red curve is unmitigated deaths. The green curve shows recorded deaths.

Accumulated deaths appear to be about 1,000 higher than they would have been had Sweden applied a UK-style lockdown. However, the UK figures also show an overshoot of about 5,000 deaths.

Different ICL lockdown timing simulations. The green and cyan graphs follow what actually occurred.

So in general Sweden and the UK are currently in a similar state. However, the Swedish trend shows a slower decline, implying that R is around or slightly above 1. If the goal in Sweden is to reach herd immunity while protecting the elderly, then it seems to be working. Everything will depend on whether a vaccine becomes available in September. If not, then Sweden’s strategy could well pay off.

Posted in Public Health | Tagged | 11 Comments

Did the UK lock down too late?

I have been running the Imperial College Covid-19 model for different scenarios. The published parameter files (on GitHub) use an R0 value of 3.0 (higher than the 2.4 published in February) with an IFR of ~1%. This predicts a total of 620,000 deaths in the UK if the disease were allowed to run its course through to “herd immunity” without any mitigation. These are the figures that SAGE must have been considering in early March, when UK policy was still to test only arrivals from China and flatten infections towards an eventual herd immunity. On 13th March, however, the advice suddenly changed:

The new advice issued by the Chief Medical Officer is as follows:
Stay at home for 7 days if you have either:

  • a high temperature
  • a new continuous cough

Do not go to a GP surgery, pharmacy or hospital.

Something dramatic changed very quickly, and by March 23rd a full lockdown was imposed: places of gathering (schools, shops, restaurants, pubs, businesses) were closed, travel was restricted and social distancing measures were applied. The new slogan was “stay at home, protect the NHS, save lives”.

I have used Neil Ferguson’s (ICL) model to investigate exactly why this sudden policy change occurred. It seems that SAGE had by then concluded that the current R value was ~3.0 and that infections were exploding, mainly in London. ICL’s model was now predicting 620,000 deaths without government intervention. This was far too much to “take it on the chin” by “slowing the curve”. But what would have happened if the decision for lockdown had instead been taken a week earlier or a week later? This is what the ICL model says.


Different ICL lockdown timing simulations

Figure 1. How the timing of lockdown measures affected final UK death rates. Note also the strange effect that the models and data all coincide on day 100 with 10,000 deaths (10th April). This appears to be because Ferguson forces the model to agree with the actual data on this day.

It would have been far worse to delay the decision a week than to advance it a week! Hindsight is a wonderful luxury for armchair critics. Ferguson’s model predicts that if the lockdown had been imposed a week earlier, it might have saved up to 15,000 lives in the short term. However, if it had instead been applied a week later, it may have cost an extra 40,000 deaths!

The IC model is driven by various parameter files which are very opaque and difficult to fathom without any proper documentation. So I simply delayed all intervention start dates by ~1 week to get these results. However, this produces the day-100 effect described above. ATTP has an ad hoc fix for this, but I am not sure whether it doesn’t also bias the result.
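The date shift itself can be done mechanically over the parameter files. This is a hypothetical sketch only: the key format here (a line ending in “start time]” followed by a bare day number) is my assumption for illustration, not the documented covid-sim parameter format.

```python
def shift_start_days(lines, delay_days=7):
    """Shift every intervention start day found in a list of parameter-file
    lines by `delay_days`. HYPOTHETICAL format assumed: a key line ending in
    'start time]' followed by a bare integer day number on the next line."""
    shifted, shift_next = [], False
    for line in lines:
        if shift_next and line.strip().isdigit():
            line = str(int(line.strip()) + delay_days)
        shift_next = line.strip().lower().endswith("start time]")
        shifted.append(line)
    return shifted

params = ["[Place closure start time]", "60", "[Number of realisations]", "5"]
print(shift_start_days(params))  # the '60' becomes '67'; other values untouched
```

Applying the same shift with `delay_days=-7` gives the "week earlier" scenario.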


Posted in Public Health | Tagged | 10 Comments

Imperial College simulation code for COVID-19

I finally managed to get Neil Ferguson’s COVID-19 simulation running on my iMac (4.2 GHz Quad-Core i7 with 16GB of memory). So it is untrue that you need a parallel supercomputer to run these simulations. The trick, though, is first to switch off the default “parallelism” option in the make file before compiling the software. I suspect that Ferguson’s original C code has recently been repackaged in C++, along with a new directory structure, all cleaned up for the release on GitHub. The famous, though now shorter, 5,000-line routine is still there.

Having looked into the software a bit, I would describe his model as a Monte Carlo simulation of how an infectious disease evolves within the population of a country. A country (such as the UK) is divided into administrative units (regions), with population distributions parameterised by town size, age profile and household size on a detailed population grid. It is certainly not a simple SIR model – like mine.

The disease (Covid) is seeded in some location (London?) with an initial 100 infected persons. The model then simulates transmission rates within and across different locations (e.g. schools, workplaces, homes, transport, pubs), between types of person (age), and between towns (transport). As more people become infected, the numbers needing hospital treatment and ICU beds, and eventually deaths, are calculated over time. I think in the original paper the results were based on initial estimates of R and IFR coming from China.
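To give a flavour of what such a Monte Carlo simulation does, here is a bare-bones stochastic SIR run of my own (a single well-mixed population with daily binomial draws) – emphatically NOT the ICL spatial model, which adds geography, households, age structure and place types on top of this idea.

```python
import numpy as np

def stochastic_sir(n=10_000, seed_infected=10, r0=3.0, gamma=0.1,
                   days=365, seed=42):
    """Toy Monte Carlo SIR, one realisation: each day, draw new infections
    and recoveries from binomial distributions. Illustrative only."""
    rng = np.random.default_rng(seed)
    beta = r0 * gamma                        # daily transmission rate
    s, i, r = n - seed_infected, seed_infected, 0
    for _ in range(days):
        p_inf = 1.0 - np.exp(-beta * i / n)  # chance a susceptible is infected today
        new_inf = rng.binomial(s, p_inf)
        new_rec = rng.binomial(i, gamma)
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r

s, i, r = stochastic_sir()
print(r / 10_000)  # final attack rate (classic SIR theory gives ~0.94 for R0 = 3)
```

Hospitalisations, ICU demand and deaths would then be derived by applying age-dependent severity fractions to the infection curve.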

Different intervention scenarios aimed at curtailing the epidemic can then also be simulated, for example by estimating their effect on reducing the demand on NHS ICU beds and on the overall infection and death rates.

The guts of the simulation seem to be all in one file, CovidSim.cpp, which was originally written by Neil Ferguson in C but transferred to C++ during the cleanup. There have recently been some scathing code reviews in the media, but I am not going to follow suit and criticise his code just for the hell of it. Yes, it should have been structured better, but I am also pretty sure it is correct and has been evolving over a long period of time, simulating ever more complex scenarios. When scientists write their own code they just want results, and probably never consider the necessity of making it all publicly available. Now Ferguson finds himself in a global maelstrom, so basically he had to publish his code, but beforehand it had to be at least partly sanitised.

Some reviewers have claimed that it has bugs because the results are “non-deterministic” or “stochastic”. However, in his defence, I would say that is exactly what Monte Carlo simulations always do: you can never get exactly the same result twice. Furthermore, it is pretty obvious to me that the code is still under active development right now, since it is being added to in real time, for example to simulate the UK’s new “test, track and trace” policy.
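The point generalises to any Monte Carlo code: a seeded run is perfectly reproducible, while different seeds scatter around the true answer by design. A toy example of my own (estimating pi, nothing to do with the ICL code itself) makes the distinction concrete:

```python
import random

def mc_pi(n, seed=None):
    """Monte Carlo estimate of pi from n random points in the unit square.
    A fixed seed reproduces a run exactly; run-to-run scatter across
    different seeds is a feature of the method, not a bug."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 < 1.0)
    return 4.0 * hits / n

print(mc_pi(100_000, seed=1) == mc_pi(100_000, seed=1))  # True: seeded runs agree
```

Two runs with the same seed agree to the last digit; unseeded runs differ slightly every time, just as the CovidSim reviewers observed.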

It also looks like another team in his group at IC has been working on the GitHub release, because in order to run the Covid scenarios you now also need both Python and R installed (see here). Python is used to generate the sample data and R is used to plot the results. Python alone would have been more than sufficient. As a result, it is not going to be easy for the man in the street to get this software up and running. Luckily I had Python and R already installed.

Having struggled through the software build, I finally managed to run the sample COVID simulations provided for the UK, which cover two scenarios: a) no intervention and b) full lockdown. It is run with an effective R = 3, which seems rather high to me (R was 2.4 in Ferguson’s original paper). It runs on my Mac in about 50 minutes. So here are the plots generated by their R scripts (not really to my taste).

Over 600,000 deaths in the UK with no intervention (R = 3, IFR = 1%) and about 40,000 deaths with lockdown intervention.

Deaths/week peak at ~1500 in mid-April with lockdown measures compared to a peak of 20,000/week with no intervention!

Finally, these are the predicted deaths per week under lockdown measures (the no-intervention case reaches ~130K in the last week of April).

Deaths per week and by age group under lockdown intervention

So how has this worked out in the UK? These figures can now be compared to the actual data which is probably best represented by the independent ONS study of excess weekly deaths.

ONS data on excess deaths in the UK in 2020

If you look at the bottom red curve, which shows excess deaths directly related to COVID-19, you can see that the lockdown predictions more or less agree with the outcomes. Does that mean that Neil Ferguson really did call the right shot?

Probably he did, yet still today we have a fundamental lack of knowledge of the real-time numbers of infected and recovered persons in the UK. The current obsession with R is actually misleading government policy, because R naturally changes during any epidemic. R always reaches 1 at the peak in cases and then falls rapidly towards zero.
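That R falls to 1 exactly at the case peak is a mathematical property of the SIR equations rather than a policy achievement: infections peak precisely when the effective reproduction number R_eff = R0 × S/N crosses 1. A quick numerical check with a deterministic SIR integration (my own sketch, with population normalised to N = 1):

```python
def sir_reff_at_peak(r0=3.0, gamma=0.1, i0=1e-4, days=365, dt=0.1):
    """Integrate deterministic SIR with simple Euler steps and return the
    effective reproduction number R_eff = r0 * S at the moment infections
    peak (population normalised to 1)."""
    beta = r0 * gamma
    s, i = 1.0 - i0, i0
    i_hist, reff_hist = [], []
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        s, i = s - new_inf, i + new_inf - new_rec
        i_hist.append(i)
        reff_hist.append(r0 * s)
    return reff_hist[i_hist.index(max(i_hist))]

print(sir_reff_at_peak())  # ~1.0: infections peak just as R_eff crosses 1
```

The same holds for any R0 (try `r0=2.4`), which is why a falling R around the peak tells us little on its own about how well interventions are working.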

At some point the costs of continuous stop-go lockdowns based on fear alone will become unbearable. We will have to learn to live with this virus.

Posted in Public Health | 32 Comments