News and Opinion

On the alleged unsuitability of behavioural science for fighting COVID-19

A recent paper questioned the suitability of the whole discipline to provide evidence for policy. The claim is unjustified and unworldly.

IJzerman et al. question whether behavioural research on COVID-19 is “suitable for making policy decisions” and conclude that policymakers should use it with “extreme care”. Because this conclusion comes from established behavioural scientists, it is likely to be influential and to give policymakers grounds to ignore or downgrade behavioural research.

From my perspective, as head of a team that undertakes behavioural science for governments, including on COVID-19, the commentary undermines our work and is unhelpful. From an objective perspective, however, the question is whether its conclusion derives from sound analysis.

The analysis draws an analogy to NASA’s nine “technology readiness” levels and proposes an equivalent for behavioural science. As a framework for thinking about how evidence informs policy, this is badly flawed. NASA does not have to launch a spacecraft until it is extremely confident that the technology will work. Policymakers enjoy no such luxury, especially in a crisis. Moreover, success for policymakers does not depend only on the reliability of the causal relationships underpinning the project; policy decisions are more complex. They embed priorities, values, and preferences regarding risk, uncertainty, and time. Public acceptance matters too.

These factors have been understood for decades by public administration scholars who research the relationship between evidence and policy. IJzerman et al. ignore this large body of scholarship, as well as specific work on behavioural evidence. This is unfortunate, because the literature overwhelmingly concludes that idealised systems for assessing evidence and converting it into replicable technologies for policy use are inapplicable, impractical, and even naïve, given the dynamic, complex context of real policymaking.

After 15 years of doing research for policy, I concur. Policy discussions rarely touch on technical issues of methodological robustness; they typically focus on whether evidence can help at all: Is there time to gather it? Do we have funding for it? Will stakeholders engage with it? Not even the best policymakers want research positioned on a nine-point evidence-readiness scale. In their environment, such niceties are not relevant. Indeed, behavioural evidence itself supports this: weighted integration of multiple factors is not the best strategy for tackling complex, multidimensional decisions.

Consequently, it is unworldly and analytically unjustified to imply that the work of behavioural scientists who do not follow such a scheme is unsound.

This is not to downplay meta-scientific issues about statistical inference, replication, and effect overestimation. There is a need for humility about what evidence does and does not tell us. These issues apply similarly to medical science. Yet it would be unthinkable to question whether medical science is, generally, unsuitable for policy decisions. The benefits are demonstrable.

Applied behavioural science is providing demonstrable benefits too, subject to ongoing critical appraisal. It has increased experimental pre-testing of policies, often via high-quality trials. It has helped policymakers to recognise when orthodox economic solutions, which dominate policy design, are unlikely to work. In response to COVID-19, governments’ use of behavioural evidence on handwashing and on support for collective action exemplifies these advances and rests on solid science.

We should think more critically before setting out to undermine such work.