The International Clinical Trials Methodology Conference (ICTMC) is held every two years to showcase developments in clinical trial methodology being proposed and tested in the UK and further afield. This is relevant to sustainability because randomised controlled trials (RCTs) are the best way to test whether interventions are effective, and the evidence they generate can prevent resources being wasted on interventions that don’t work. As the Programme Manager for the ISRCTN registry, which registers RCTs and other studies with human health or wellbeing outcome measures, I am keen to keep developing my expertise in clinical study design and conduct, so I was excited to attend the 6th ICTMC in Harrogate, UK, in October 2022. I’ve summarised some of the things I learned.
The carbon footprint of clinical trials can be measured and reduced
The Sustainable Healthcare Coalition (SHC) has created a Care Pathway Carbon Calculator that can be used to estimate the carbon footprint of healthcare interventions. The SHC estimates that clinical research spending accounts for about 5% of total global healthcare system costs, which would suggest that clinical research might result in about 100 million tonnes of carbon dioxide equivalent (CO2e) emissions per year, roughly the same as Belgium’s annual national emissions.
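To make the scale of that estimate concrete, here is a back-of-envelope sketch (my own, not the SHC’s published method). It assumes that emissions scale in proportion to spending and that healthcare’s total global footprint is around 2 gigatonnes CO2e per year, the figure the 5% share implies:

```python
# Back-of-envelope check of the SHC figure (my own sketch, not the SHC's method).
# Assumptions: emissions scale in proportion to spending, and healthcare's total
# global footprint is ~2,000 Mt CO2e per year (the figure the 5% share implies).
healthcare_footprint_mt = 2_000  # million tonnes CO2e per year (assumed)
research_share = 0.05            # clinical research share of healthcare spending

research_footprint_mt = healthcare_footprint_mt * research_share
print(f"Estimated clinical research footprint: {research_footprint_mt:.0f} Mt CO2e/year")
# -> ~100 Mt CO2e/year, comparable to Belgium's annual national emissions
```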
The CiCT project is a collaboration between the Institute of Cancer Research (ICR) and Liverpool Clinical Trials Centre, funded by the National Institute for Health and Care Research (NIHR), that aims to reduce the carbon footprint of clinical trials. CiCT intends to refine and test its methodology in order to develop a carbon footprint calculator for study investigators, and will also conduct a public consultation to understand what trade-offs might be acceptable when a study emits carbon now but may produce evidence that saves carbon later.
An audit of five trials involving outpatients in Ireland calculated the carbon costs of patient and staff travel, shipments and onsite energy use. In total, the five trials were responsible for approximately 41 tonnes CO2e, equivalent to about 4 years of emissions for an average Irish resident, with the main contributor being patient travel.
The carbon footprint of the NightLife study was reduced by holding most meetings and patient interviews online, staff working from home and participants providing consent electronically. Participants also provided qualitative data on kidney failure and dialysis using the Photovoice method, in which they took photographs and told the story behind them and what they felt. Not only was 104 tonnes CO2e saved, but there were also significant financial savings, which were reinvested in researcher training, participant benefits and increased scientific communication. Patient and public involvement and engagement (PPIE) meetings were held online rather than in person, which likely enhanced their geographic and ethnic diversity.
Research waste is still disgusting
“Most randomised trials are bad and most trial participants will be in one”, according to a recent commentary. Professor Shaun Treweek and colleagues looked at 1,640 trials included in 96 randomly selected Cochrane systematic reviews published between May 2020 and April 2021 and scored the risk of bias using the Cochrane tool. Only 8% were classed as having low risk of bias, with 62% having high risk and 30% uncertain. This means that 222,850 (56%) of participants were in trials with a high risk of bias. Applying three different estimates of cost per participant suggests that between GBP 700 million and GBP 8 billion was spent on these unreliable studies. The recommendations were that no trial should be funded or receive ethical approval unless it is designed with methodological and statistical expertise involved, that the risk of bias should be assessed at the design stage, and that there must be more investment in methodology research and infrastructure, including training and support for methodologists and statisticians.
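To put those figures in perspective, here is a quick sketch of the implied arithmetic (the per-participant costs below are back-calculated by me from the reported totals, not quoted from the analysis):

```python
# Quick sketch of the arithmetic behind the headline numbers.
# The per-participant costs are back-calculated from the reported totals.
participants_high_risk = 222_850  # participants in high-risk-of-bias trials
total_participants = participants_high_risk / 0.56  # 222,850 is 56% of the total
print(f"Total participants across the 1,640 trials: ~{total_participants:,.0f}")

# Reported spend on high-risk-of-bias trials: GBP 700 million to GBP 8 billion
for total_gbp in (700e6, 8e9):
    per_participant = total_gbp / participants_high_risk
    print(f"GBP {total_gbp:,.0f} in total implies ~GBP {per_participant:,.0f} per participant")
```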
Journal editors and peer reviewers are not reliably checking manuscripts against reporting checklists (e.g. SPIRIT and CONSORT for RCT protocols and results articles) or checking that outcome measures are accurately reported. Checking for study registration is also not prioritised. Sharing of data, code and other materials is still patchy, and the prevalence of spin (overstating the positive aspects of the results) is rising in both journal articles and abstracts. These behaviours contribute to research waste because the evidence generated is not reliable. Professor Isabelle Boutron suggested the following measures to tackle these problems:
- Checking manuscripts against reporting guidelines should be separate from peer review and could be a role for which journals train and pay early-career researchers (ECRs)
- Reporting should be more structured, with AI used to identify and integrate reporting across different sources and formats
- Interpretation of results should be done by independent people, not the study investigators
- Comparing institutions on their transparency might result in greater recognition of the work involved in reporting completely and adhering to standards
Clinical research is being made more accessible and inclusive
The TRECA study within a trial (SWAT) (study registration, protocol) found that recruitment of children and young people into the host trial was increased by using multimedia information resources (websites where people can access different media and choose the order in which to view them) rather than standard text-based participant information sheets.
The UK Clinical Research Collaboration (UKCRC) registered clinical trial unit (CTU) Network’s PRincipleS for handling end-of-participation EVEnts in clinical trials REsearch (PeRSEVERE) project aims to “help ensure that the right to withdraw informed consent is put into practice in a way that protects both individual participants’ rights and the quality of research.” This means finding ways that allow individuals to modify their participation in a study rather than withdraw entirely, thus minimising data loss. Following a public consultation on the draft, a Principles document has been published to give researchers guidance on best practice.
The RAPID-19 study team showed that it is possible to involve patients and the public in trial design when a study needs to be set up quickly in response to a pandemic. The PoINT project investigated involving the public in numerical aspects of study design. Some adaptations to trial management introduced as a result of the COVID-19 pandemic might benefit clinical research outside of pandemic situations (see, for example, the WILL study: https://trialsjournal.biomedcentral.com/articles/10.1186/s13063-022-06834-4).
Surgery RCTs are complex and challenging
Guidelines for optimising clinical research in surgery have been evolving since the late 1990s, when surgeons were criticised for being overly dependent on case series as a source of evidence. Formed in 2008, the IDEAL Collaboration is “an initiative to improve the quality of research in surgery and complex interventions to build up a robust evidence base about new procedures and devices.” This acknowledges that surgery is a complex intervention that is not easy to evaluate because many variables are involved. These include:
- Variability of starting points, especially for traumatic injuries, and variability between surgeons in dexterity, experience and skill
- Patient and surgeon preferences. If the study aims to randomise patients to surgery or a non-surgical intervention, it can be difficult to find people who don’t have a preference that then could affect their response to the intervention. Surgeons also have their own preferences and ways of doing things that have been built through practical experience. In the TOPKAT study, which compared total and partial knee arthroplasty, if surgeons favoured one technique, they were paired with a surgeon who favoured the other so that one of the pair would operate on the patient whichever procedure they were randomised to (TOPKAT protocol).
- Variability of pre-operative and post-operative care can affect outcomes
- What should be the comparator? If the question is whether a surgery works, it needs to be compared with no treatment. If the question is whether it works and how it works, then the surgery should be compared with a placebo to test whether there is a placebo effect. To find the best treatment, the surgery could be compared with another surgery or with non-surgical management.
- How can participants and assessors be blinded to which intervention they were randomised to? In the NEON study, all participants with digital nerve injury will have surgery but will be randomised in the operating theatre to have the nerve ends sutured or just aligned. In the RACER and RACER-Hip studies comparing robot-assisted and conventional surgery, participants receiving conventional surgery had sham incisions to replicate the guides placed for the robot-assisted surgery.
- Adherence (do surgeons conduct the surgery exactly as they are meant to do in the study protocol?) and compliance (do patients do all the actions they are supposed to do before and after surgery?)
- How to conduct a statistical analysis given all of this variability (one possible approach is sketched below)
Again, this complexity demonstrates the need for methodological and statistical expertise and planning when designing clinical research in surgery.
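On that last point about analysis, one standard approach (an illustration of my own, not a method presented at the conference) is a mixed-effects model that treats the surgeon as a random effect, so that between-surgeon variability is separated from the treatment effect. A minimal sketch on simulated data:

```python
# Minimal sketch of one way to handle surgeon-level variability:
# a linear mixed-effects model with surgeon as a random intercept.
# (Illustrative only; a real surgical trial needs a pre-specified analysis plan.)
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_surgeons, n_per_surgeon = 20, 30

surgeon = np.repeat(np.arange(n_surgeons), n_per_surgeon)
treatment = rng.integers(0, 2, size=surgeon.size)         # randomised arm (0 or 1)
surgeon_effect = rng.normal(0, 2.0, n_surgeons)[surgeon]  # skill/experience variation
outcome = 50 + 3 * treatment + surgeon_effect + rng.normal(0, 5, surgeon.size)

df = pd.DataFrame({"outcome": outcome, "treatment": treatment, "surgeon": surgeon})

# The random intercept per surgeon absorbs between-surgeon variability,
# giving a cleaner estimate of the treatment effect.
model = smf.mixedlm("outcome ~ treatment", df, groups=df["surgeon"]).fit()
print(model.summary())
```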