Behind the Book | Modelling pandemics

How do scientific models help us to make decisions?
Published in Sustainability

Much has been made of the fact that governments are “following the science” in their responses to the COVID-19 pandemic. Scientific models play a central role in this science: epidemiologists construct, investigate, and reason with models in order to draw conclusions about the spread of COVID-19 and the effects that different intervention policies would have. But how do scientific models play this role? How can models inform us about a selected aspect or part of the world? How do scientists investigate a model? And what is a model anyway? In our book Modelling Nature: An Opinionated Introduction to Scientific Representation we outline these questions, show how they are related to each other, and assess the different ways in which philosophers have tried to address them.

Of central importance is the functional role that models play. Models are not objects of a special kind; they are objects that are used in a special way. Models inform us about the world because they are used to represent. Just as one can use a map to learn about the territory it represents, one can use a model to learn about what it represents. This invites the question: how do models represent?

Representation and Keys

With a map we agree that certain coloured lines stand for certain kinds of roads, that certain symbols stand for certain kinds of buildings, and that left-to-right on the map stands for west-to-east in the territory. Maps come with keys specifying these correspondences, and we have to learn how to read a map in order to understand it. The same applies to models. In order to use a model to represent, scientists have to decide which aspects of their models are supposed to correspond to which aspects of the world that they are interested in. They have to specify a key that takes the exemplified features of models to features that scientists are willing to attribute to the target.

But a model rarely, if ever, portrays its target exactly as it is. Models aren’t mirror images! So this specification requires that scientists be selective about which features their models exemplify (and which are artefacts of model-land and therefore aren’t supposed to correspond to anything in the world), and how they relate to the, possibly different, features that they stand for. Keys can be based on similarities, but they can also contain limit relations, tolerance thresholds, and conventional associations.
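To make this concrete, here is a toy sketch, in Python, of what a key might look like: a rule that takes an exemplified model feature to the claim scientists are willing to impute to the target, carrying an explicit tolerance threshold. This is our own illustration rather than anything from the book, and every name and number in it is hypothetical.

```python
# A toy rendering of a key: it pairs an exemplified model feature with the
# feature imputed to the target, plus a tolerance threshold. All names and
# numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class KeyEntry:
    model_feature: str    # the feature the model exemplifies
    target_feature: str   # the feature attributed to the target
    tolerance: float      # slack the correspondence carries (0 = exact match)

def impute(entry: KeyEntry, model_value: float) -> str:
    """Translate a model-land value into a hedged claim about the target."""
    slack = model_value * entry.tolerance
    return (f"{entry.target_feature}: between {model_value - slack:.0f} "
            f"and {model_value + slack:.0f}")

# Read the model's peak count as real-world case numbers within a 20%
# tolerance, not as an exact prediction; artefacts of model-land get no entry.
entry = KeyEntry("peak infections (model)", "peak case numbers (target)", 0.20)
print(impute(entry, 12_000))
```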

Resisting the idea that models are mirror images of their targets, and requiring that we specify the keys that accompany them, has important implications for how models are used to guide policy decisions. For some epidemiological models (like the “Susceptible, Infected, Recovered” (SIR) model), keys export important but imprecise features concerning, for example, the general shape of how case numbers will grow and the importance of threshold quantities like the reproductive number. However, because these models are relatively coarse, these features lack precision. In contrast, Imperial College London’s model is a more fine-grained representation of how the disease would spread across Great Britain and the United States of America under various intervention scenarios, and it includes more local information.
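The SIR model itself is compact enough to sketch in a few lines. The Python fragment below is a minimal, purely illustrative implementation (the parameter values are our assumptions, not fitted to COVID-19 data); it shows the two coarse features the keys export: the characteristic epidemic curve, and the reproductive number R0 = beta/gamma acting as a threshold between growth and decline.

```python
# A minimal deterministic SIR sketch; parameter values are assumptions,
# not fitted to any real outbreak.

def sir_infected(beta, gamma, i0=0.001, days=200, dt=0.1):
    """Euler-integrate dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I
    (as fractions of the population); return the infected fraction over time."""
    s, i = 1.0 - i0, i0
    trajectory = []
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        s -= new_inf
        i += new_inf - new_rec
        trajectory.append(i)
    return trajectory

# The reproductive number R0 = beta / gamma is the threshold the key exports:
# above 1 cases grow into the familiar epidemic curve, below 1 they fizzle out.
print(f"peak infected fraction, R0=3.0: {max(sir_infected(0.30, 0.1)):.3f}")
print(f"peak infected fraction, R0=0.8: {max(sir_infected(0.08, 0.1)):.3f}")
```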

Scientific Models are not Reality

However, the fact remains that the Imperial model is a model, and it should not be mistaken for reality itself. As such, we need to specify which features it exemplifies and the key that takes these features to those imputed to the target. Seeing what the model exemplifies requires understanding that each intervention scenario is typically explored in multiple “runs” of the model, so rather than exemplifying a single number for quantities like ICU cases or COVID-19 deaths, the model exemplifies ranges of values for these figures. These ranges are typically accompanied by confidence judgements like “95% of the model runs under this scenario had results in this range”. In turn, the key that accompanies the model may lower the confidence values, or widen these ranges, in order to accommodate the ways in which we know the model diverges from its target. As a result, what the model tells us about the target is less precise than it may originally appear.
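This run-and-summarise workflow can be sketched with a toy stochastic model. The chain-binomial dynamics and the 10% widening factor below are our own illustrative assumptions, emphatically not the Imperial College model; the point is only to show how many runs yield a range with a confidence judgement, which the key then widens before anything is imputed to the target.

```python
# Sketch: many runs of a toy stochastic SIR give a range plus a confidence
# judgement; the key widens that range before imputing it to the target.
import numpy as np

def peak_infections(beta=0.3, gamma=0.1, n=100_000, i0=100, seed=0):
    """One stochastic run (daily binomial infections/recoveries); return the peak."""
    rng = np.random.default_rng(seed)
    s, i, peak = n - i0, i0, i0
    while i > 0:
        new_inf = rng.binomial(s, beta * i / n)
        new_rec = rng.binomial(i, gamma)
        s -= new_inf
        i += new_inf - new_rec
        peak = max(peak, i)
    return peak

runs = sorted(peak_infections(seed=k) for k in range(200))
lo, hi = runs[5], runs[194]  # drop the 5 lowest and 5 highest of 200 runs
print(f"model-land: 95% of runs peaked between {lo} and {hi}")

# The key tempers this: widen the range to absorb known model-target divergence.
margin = 0.10  # assumed safety margin, purely illustrative
print(f"imputed to target: roughly {int(lo * (1 - margin))}"
      f" to {int(hi * (1 + margin))}")
```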

This makes decision-making on the basis of the model more difficult. But uncertainty and imprecision are facts of life when it comes to (much of) model-based science, and rather than hiding them with the artificial precision of model-land, when it comes to interpreting what models tell us about the actual world, it’s best to be as transparent as possible about their limitations. Thinking through the ways in which models represent their targets provides us with the conceptual framework in which to come to grips with these issues.

About the authors

In their forthcoming book, Modelling Nature: An Opinionated Introduction to Scientific Representation, philosophers Roman Frigg and James Nguyen assess the strengths and weaknesses of existing accounts of how scientific models represent. On the basis of this analysis they present their own account of scientific representation, built on four elements: denotation, exemplification, keying-up, and imputation (the DEKI account).
