At Nature Human Behaviour, we want to feature research that advances science conceptually and that strengthens and evaluates the evidence base of important scientific discoveries, but also work that could have considerable real-world impact. Research into health behaviours is an obvious candidate, and as an editorial team we’ve been making a special effort this year to visit as many meetings as we can in areas like public health, epidemiology, health psychology and health behaviour change.
A week ago, I was fortunate to be able to travel to Galway, Ireland – the land of my forebears (ish) – to attend the 32nd annual conference of the European Health Psychology Society (EHPS) on behalf of NHB. EHPS is a meeting on a more intimate scale – a few hundred delegates form a close-knit community of researchers, students and practising health psychologists – and the beautiful setting of the NUI Galway campus, combined with the warmth and openness of this community, made it a real pleasure to attend.
The theme of the conference this year was “Health Psychology Across the Lifespan: uniting research, policy and practice”. As themes go, that sounds pretty broad, but it characterised well the breadth of talks and discussion at the meeting – namely, how can you integrate good theory, practice, engagement and evaluation to ensure that health interventions effect positive, measurable and lasting change? Several points emerged for me from the talks I attended.
There’s a proliferation of behavioural interventions out there – from health-practitioner-guided programmes to reduce alcohol consumption to mobile phone apps designed to improve medication adherence – but often, interventions are “under-theorised”. By that I mean that it isn’t clear which underlying psychological construct an intervention is trying to target to encourage change, which makes it tricky to evaluate how and why it worked or failed. A talk by Rik Crutzen (Maastricht University) advocated breaking down behaviours into their basic “evolutionary learning processes”, while Robert West (UCL) presented a method to specify and compare different theories of behaviour change – these methods would allow researchers and practitioners to compare theories and decide which are most relevant when designing a new intervention. On top of that, there were talks from Keegan Knittle (University of Helsinki) and others on designing tools that make interventions – and the theory behind them – accessible and enactable for non-research practitioners and lay people.
Evidence synthesis is a key part of health research and so, fittingly, there were plenty of presentations of meta-analyses, systematic reviews, meta-analyses of meta-analyses, systematic reviews of reviews and so on. I don’t mean this to sound facetious; with the proliferation of theories and interventions out there, these kinds of evaluation are important for judging what “works”, what’s cost-effective and which gaps in understanding should be targeted. An analysis by Marijn de Bruin (University of Aberdeen), for example, showed that bariatric surgery was by far the most effective weight-loss treatment for severely obese individuals; but taking a health economics approach, de Bruin also revealed a sliding scale of cost-effectiveness: the tighter the health budget, the greater the proportion of patients for whom behaviour change interventions are more cost-effective than surgery.
Intervention fidelity was another hot topic at the meeting. If you’re going to have confidence in the outcomes of an intervention, you need to know whether it was administered – and followed by participants – in the intended way. Fabiana Lorencatto (UCL) and Nelli Hankonen (University of Helsinki) told us that many interventions include no or only limited fidelity checks, and suggested that updating behaviour change taxonomies to include optional components could be a way to balance the needs for fidelity and adaptability in behaviour change programmes. In one of the “state of the art” sessions, David French (University of Manchester) warned about the errors introduced by measurement bias (i.e. the act of measurement itself changing participants’ behaviour). There is evidence across randomized controlled trials that these biases occur and have small but significant effects – which matters, given that the effects of interest are often themselves small at the level of the individual. His team is working on best-practice guidelines to minimize measurement bias in complex intervention RCTs.
Other interesting themes at the meeting included interventions that use self-compassion, or compassion for others, to reduce self-destructive behaviour; the potential danger of perfectionist concerns for personal health; and building better public health programmes through patient and public involvement. Suffice it to say that this was a very engaging meeting and, as an editor, I learned a lot more about the field. Special thanks to local organizer Gerry Molloy of NUI Galway for bringing the conference to my attention. Let’s hope that the next meeting sees all this year’s exciting plans come to fruition.