Modelling and Data
If we did not know it already, the
provision, analysis and representation of quality data have been shown to be
vital in understanding a complex issue. Knowing the nature, progress and impact
of the virus has depended on data gathering. The pandemic has also exposed the
difficulties of interpretation. To give a simple example, the BBC regularly
quotes three different death rates for Covid.[7]
Or, choosing a more challenging illustration, consider Public Health England's
struggles to understand the disproportionate death rates in the black and
minority ethnic community.[8]
Indeed, the failure to collect more data earlier and to undertake international
benchmarking was among the early criticisms of the government's approach.[9]
There have also been disputes about
modelling the data. Those who think that the response to the pandemic was
overblown complain that the modelling of the consequences of inaction was
flawed.[10]
Commenting on modelling in a Cambridge
University podcast on 14th May[11],
Professor Mike Hulme emphasised the value of models in aiding policy but also their
limitations in not capturing all the relevant matters that are necessary for
‘good and wise decision making’. Specifically, he warned about the ‘allure’ of
models that seemed to offer accurate predictions to politicians: they are not
‘truth machines’, he said (not least because modellers
often find it hard to ‘retain critical distance from their own creations’)[12]. Just as you
should have many ‘voices’ advising your policy, so you should not rely on a
single explanatory model because it will be ‘partial’ in some way.
As Hulme argues, we should acknowledge that
decision makers in the end make their own judgements, hopefully informed by
evidence, but the balancing of that evidence is their preserve. And in this regard, technical people such as scientists need to
be aware, in presenting their evidence, that they are much more used to dealing
with uncertain and complex data than their political counterparts, who wish, or
need, to narrow down their thinking.[13]