Will ChatGPT get your research into The New York Times?

Science communication's dilemma: Scientists must engage with the public, yet time is scarce. Can ChatGPT come to the rescue and expedite researchers’ public writing? That's the question that piqued my curiosity.

"Nothing in science has any value to society if it is not communicated”, wrote Anne Roe, author of The Making of a Scientist, 1953.  I agree. I am a passionate science communicator but with increased accountability and university workloads, I struggle to give it priority. I’m not the only one. The academic women we interviewed for our book Inspirational Women in Academia, pointed out that  they and other scientists have valuable domain knowledge, yet limited time to share it. Balancing the demands of science production with science communication is a constant juggling act.

The popularity of research blogs shows that curious minds are everywhere, not just in academia. Freeing up researchers’ time for science communication reduces the odds that evidence gets misinterpreted. Editors swoon over essays and research blogs penned by the very researchers behind a study, and scientists are constantly encouraged to write a blog or guest column. Personally, I devour Psyche, Scientific American and The Conversation, making them a daily news treat.

So, when ChatGPT became publicly available, I was intrigued by its promise to make this kind of writing more efficient. How would it work?

For the uninitiated: with tools like ChatPDF, it’s possible to summarize published papers and generate full-fledged Op-Eds in a journalistic style. It takes two minutes and four steps (a scripted equivalent is sketched after the list):

1. Upload a PDF file of a published paper to ChatPDF.

2. Prompt it to write an Op-Ed based on it. For example: “Write an Op-Ed based on this PDF.”

3. Copy the output into Word, check it for accuracy and make edits.

4. Submit it to your favorite news outlet.
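For readers who prefer scripting, here is a minimal sketch of the same workflow using the OpenAI Python SDK and the pypdf library instead of ChatPDF. The file name, model choice and prompt wording are illustrative assumptions, not part of the original recipe, and very long papers may need truncating to fit the model’s context window.

```python
# Minimal sketch: steps 1 and 2 of the four-step recipe as a script.
# Assumes an OPENAI_API_KEY environment variable and the packages
# `openai` and `pypdf`; the file name and model are placeholders.
from pypdf import PdfReader
from openai import OpenAI


def draft_op_ed(pdf_path: str, model: str = "gpt-4o") -> str:
    # Step 1: extract the text of the published paper from its PDF.
    reader = PdfReader(pdf_path)
    paper_text = "\n".join(page.extract_text() or "" for page in reader.pages)

    # Step 2: prompt the model to write an Op-Ed based on the paper.
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "You are a science journalist writing for a lay audience."},
            {"role": "user",
             "content": "Write an Op-Ed based on this paper:\n\n" + paper_text},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Steps 3 and 4 remain human work: check every claim against the
    # paper, edit the draft, then submit it to an outlet.
    print(draft_op_ed("my_published_paper.pdf"))
```

Note that a script like this automates only the drafting; as the rest of this piece argues, the accuracy check in step 3 is the part that cannot be delegated.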

I tried the process with some of my published journal articles. A few tweaks here and there, and voilà! Gripping, jargon-free storytelling with vibrant metaphors that effortlessly unravel the complex phenomena discussed in my paper. So what’s not to like?

My first reaction was: Wow! I can say goodbye to writing blogs in the small hours. With the power of generative AI, I can now reclaim precious time for scientific production while letting it handle the task of science communication.

But the excitement didn’t stop there. This technology has the potential to democratize science communication, granting all researchers the chance to summarize their studies in captivating language. For those whose first language isn’t English and who aren’t well-versed in various writing styles, this could be an invaluable asset.

Inclusive science communication is not only about reaching diverse stakeholder groups but also about including diverse research voices. With generative AI, gone is the spotlight reserved for research celebrities and media-favored star communicators – those researchers who always get cited in the media and asked for quotes and interviews, even if they know little about the topic.

If you are a newcomer with no media connections, or if, like me, you are a researcher with English as an additional language, you can have your study summarized in an engaging essay. In fact, every qualified researcher can produce a summary of their published paper for a lay audience – all they need to do is follow four simple steps. Too good to be true?

I soon spotted the pitfalls. The more I prompted the chatbot, the more the allure of AI time-saving faded away. As the bot continued generating responses, it started incorporating "facts" into my article that sounded convincing but were never part of my research. Suddenly, sentences about potential research avenues were being credited to me as if I had made those discoveries myself.

Fixing my own errors was easy enough since I knew my research inside out. But ensuring accuracy in summarizing others’ work proved just as time-consuming as starting from scratch, if not more so. Instead of saving time, the lack of accuracy doubled my efforts. It also left me concerned: with OpenAI’s tools at everyone’s fingertips, will research now be summarized in captivating blogs, giving credit where it isn’t due? What if it unleashes a tsunami of compelling press releases and persuasive "research" blogs? Blogs that appear research-based but actually contain sneaky, yet plausible, false information?

When generating academic essays and research summaries, generative AI churns out compelling and well-written arguments. The problem is that it adds its own thinking to the mix. The danger here is not only outright false information. The danger is that the bot treats the two sides of an argument as equal. False balance, or bothsidesism, is typical of untrained journalists, who misrepresent the strength of scientific arguments and present issues as more balanced than the evidence actually supports. This leads to exaggerations at best, and lies at worst.

Bothsidesism undermines reliable media. Think of an argument that gives as much credit to hundreds of climate researchers as it does to one vocal climate denier. Or findings from a meta-analysis covering thousands of children presented with the same weight as an ethnographic study of one child. Neither ChatGPT nor ChatPDF is trained to evaluate the rigor and strength of scientific evidence. With a few prompts, the bot churns out convincing arguments claiming an equal balance of research evidence for and against climate change.

With ChatGPT at the helm, researchers can lighten their load in written public engagement. But because ChatGPT writes persuasive texts without the necessary context and background information, it generates new problems.

When I asked the bot to rewrite this piece in jargon-free language, it produced a very readable text with a neat “pros and cons” list of AI’s value for science communication. But my verdict is less black-and-white. My conclusion is that we urgently need mechanisms for verifying the authorship of research blogs and press releases. Updating the UNESCO guide and other resources on ChatGPT and higher education is crucial. The guidance should emphasize that while generative AI can enhance writing, it won’t improve arguments and might generate inaccurate and unreliable information.

The genie is out of the bottle: generative AI will increase science communication, because anyone can follow the four steps and generate captivating “research” blogs and powerful Op-Eds. This creates the need for a peer-review system for research statements and summaries, with reviewers adhering to rubrics aligned with disciplinary consensus and the standards of scientific papers. In other words, while the convenience of generative AI is undeniable, it brings a fresh set of responsibilities. These responsibilities accelerate the need for a revamped approach to science communication among all faculty members.

As such, generative AI has augmented and amplified the science communication dilemma. However, it has also revealed the intrinsic worth of crafting an Op-Ed: the personal intellectual endeavor of academics as they interpret and consolidate evidence, along with the valuable learning that occurs when they transform their work into a compelling and authentic message. Entrusting AI with the process would carry risks for both the researcher and society.


Comment from Natalia I. Kucirkova (4 months ago):

I have been discussing with my students (all non-native English speakers) the distinction between using ChatGPT to check grammar on a self-written text versus generating a text that they then edit in their own style. The traceability of ChatGPT conversations (as long as one uses the same account to log in) is a good way to maintain transparency, but are there other ways to do this? I appreciate others sharing their experience, thank you!
