The release of ChatGPT in November 2022 has had serious implications for research integrity in the UK and globally. With the gradual realisation of the technology’s potential benefits, attitudes have shifted from rejection to acceptance; however, there is still insufficient clarity from academic stakeholders, including publishers, as to what constitutes appropriate use of Generative AI (GenAI). Some of the ideas discussed in this blog post appear in a book chapter by the same author.
Academic Publishers’ Guidelines on GenAI Use for Research Writing
There is broad consensus amongst academic publishers as to the basic dos and don’ts of GenAI use in research writing. In line with COPE’s recommendations, publishers agree that GenAI tools cannot be listed as authors or co-authors of scholarly outputs because they cannot meet fundamental requirements for authorship, such as responsibility and accountability for the finished product. There is also a shared appreciation of the positive impact the technology can have on improving the surface features of written texts, such as grammar, vocabulary and general layout.
Notwithstanding this common ground, publishers’ guidelines reveal important nuances when it comes to other types of GenAI-assisted textual intervention. For example, Elsevier doesn’t allow anything beyond text editing, while Sage, Springer Nature, Science Journals and Taylor & Francis will accommodate uses for content generation, which they define only in broad terms. The expectation of even the most liberal publishers is that such uses will be openly acknowledged and described in detail by authors in order to maintain transparency and trust, with the proviso that the ultimate judgment of whether the technology has been appropriately deployed rests with the respective in-house editors. Although not unreasonable, such an expectation is likely to cause anxiety amongst authors. In the absence of specific guidelines on how to avoid misconduct, it introduces another element of potential subjectivity, and hence risk, into the already beleaguered process of negotiating peer review feedback.
Definitions, Contradictions and the Future of Research Integrity
The trouble with GenAI is that it further compounds points of tension that have always been part of academic knowledge production. For example, where is the boundary between textual editing and meaning-making? If we employ a GenAI tool to improve the language and readability of our research outputs – a use which seems universally accepted by academic publishers – will there be a point beyond which the content, creativity and intellectual contribution of our writing become affected? Supplying a missing grammatical article or the right preposition, ensuring subject–verb agreement or finding an apt word collocation are mechanical changes that can facilitate understanding without really altering the meaning of the text. However, rewriting passages by paraphrasing or building on an author’s initial input, rearranging the sequence of points and/or suggesting new logical links, while technically remaining within the bounds of language and readability edits, will inevitably affect what the text says.
Things are further complicated if we set out to use GenAI for idea generation in the first place, as the Sage and Taylor & Francis guidelines permit. Even if all GenAI-produced content is verified and referenced appropriately, in line with publishers’ requirements, it is not entirely clear whether such retrospectively authenticated GenAI content complies with plagiarism rules. UKRIO defines plagiarism as ‘using other people's ideas, intellectual property or work (written or otherwise) without acknowledgement or permission,’ which is exactly what any GenAI tool does, given that the datasets on which its predictive sequencing of words is based have been harvested from human authors without their consent or attribution. Retrofitting plausible references to a pastiche of other people’s words mashed together by a computer programme can be neither fully accurate nor ethical.
Perhaps all these contradictions arise from an already outdated understanding of research integrity. It has been suggested that humanity is entering a new stage of its sociocultural evolution – postplagiarism – in which all texts will be co-produced by human and artificial intelligence. For such ‘hybrid’ texts, it will be impossible to decouple human from machine input, and concepts like ‘intellectual property,’ ‘originality’ and ‘research contribution,’ as we know them today, will have to be reimagined.
Author bio
Dimitar Angelov is an Assistant Professor at Coventry University’s Research Centre for Global Learning. As a specialist in academic writing and writing for publication, he has led on conceptualising and delivering innovative researcher development interventions for early, mid-career and senior researchers in the UK and internationally. Dr Angelov’s research interests focus on higher education pedagogy, as well as academic ethics and integrity in the context of Generative AI and transnational university partnerships.