Do you remember the headline in The Guardian newspaper from 18 December 2021: “UK scientists: Bring in new curbs now or face up to 2 million daily infections”? And whatever happened to the scenario of “6,000 COVID deaths per day”?
Reality turned out to be far removed from these ‘scenarios’ emanating from Britain’s leading epidemiologists. Plan B measures had already come into force in the week following 8 December – 10 days before The Guardian’s threatening headline – so what “new curbs” did scientists deem necessary to avoid up to 2 million daily infections? ‘COVID deaths’ are currently running at around 250 per day – and around 30 per cent fewer than this if one recognises the difference between dying ‘from’ COVID and dying ‘with’ COVID.
There are many things one could say about the ‘reality gap’ that epidemiological models – and promotional media outlets like The Guardian – have created around COVID.
One could say that it shows how (certain) media platforms prefer headlines with dramatic numbers and how they seek to amplify the perception of COVID risks among their followers. This is consistent with the Social Amplification of Risk Framework, a well-established theory in social psychology.
One could perhaps also say that it shows that scientists’ model-based predictions of virus risk are wrong. This would be a little disingenuous, however, since most epidemiological modellers are careful to point out the uncertainty of the assumptions they make in their modelling, and to stress that they offer scenarios – possible outcomes, given certain assumptions – rather than unconditional predictions.
But even allowing for the uncertainty and incompleteness of scientists’ knowledge and models, one can still point to the erosion of public trust that the appearance of such a ‘reality gap’ generates. When media reporting of COVID risks is so wildly out of alignment with reality, publics are entitled to question not just the motives of the journalists involved, but also the way in which modellers justify and communicate their results to media and politicians.
Publics are also entitled to ask whether the medical community in the UK is in the grip of ‘pessimism bias’, or even covert racism – as when, in early December, it ignored early evidence from South African scientists that the Omicron variant was substantially less dangerous than the Delta variant, as it has indeed proved to be.
The above observations are all true and important. But the additional point I emphasise here is one that I have written about several times before, with respect both to the pandemic and to managing climate change risk. Put simply, it is this: ‘Do not govern risk using models’.
In a very important Commentary published in Nature in June 2020, ‘Five ways to ensure that models serve society: a manifesto’, Andrea Saltelli and colleagues observed that “Mathematical models are a great way to explore questions. They are also a dangerous way to assert answers.” Too often over the past two years, epidemiological models have been used to do exactly that: to ‘assert answers’.
As the British statistician George Box is often quoted as saying: “Essentially, all models are wrong, but some are useful”. We should not expect models to give correct answers, and it is dangerous to think that they do or that they can. But in his 1987 book, written with Norman Draper, Box went on to pose a more important, practical question: “How wrong do [models] have to be to be not useful?”
What we have seen repeatedly in the case of epidemiological models of COVID – in how they have been promoted by the media and used by political interests – is that model predictions can be persistently and egregiously wrong, and so need handling with extreme caution.
In the Nature Commentary referred to above, Saltelli went on to say, “Asking models for certainty or consensus is more a sign of the difficulties in making controversial decisions than it is a solution, and can invite ritualistic use of quantification.”
‘Ritualistic use of quantification’ is what we are seeing happen in the management of both COVID and climate change risks. The danger of trusting in numbers is what Ted Porter warned against in his classic 1995 book ‘Trust in Numbers: The Pursuit of Objectivity in Science and Public Life’.
If models – whether climate models or epidemiological models – are used to assert answers to policy choices, or to substitute for balanced political judgement, they will misdirect public policy in dangerous ways.
Mike Hulme, 22 January 2022