Chat or Cheat? What Drives Students to Use ChatGPT?
Introduction
Artificial intelligence has entered university classrooms without a clear rulebook. Tools like ChatGPT are now part of students' everyday learning, helping them draft texts, brainstorm ideas, or understand difficult content. But their use also raises uncomfortable questions: Where do we draw the line between chatting and cheating? And what influences students' decisions to use, or to avoid, ChatGPT in their studies?
Does ethical awareness discourage AI use?
That is what we initially expected: that students with a strong awareness of academic dishonesty, those who view behaviors such as cheating on an exam or submitting someone else's work as one's own as highly dishonest, would use ChatGPT less. But the data told a different story. Our recent study of 468 undergraduates found no direct relationship between students' ethical beliefs and their use of ChatGPT. Students who considered academic dishonesty a serious issue did, however, tend to perceive ChatGPT as risky, and it was that risk perception, not the ethical belief itself, that explained their lower usage and weaker intention to continue using the tool.
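To make that mediation pattern concrete, here is a minimal sketch in Python on simulated data. It is not the study's actual model, data, or effect sizes; the variable names, the synthetic coefficients, and the simple OLS-based decomposition are all assumptions for illustration only.

```python
# Illustrative mediation sketch (ethical beliefs -> perceived risk -> use)
# on SIMULATED data. Not the study's model or data; names and effect
# sizes below are assumptions chosen to mimic full mediation.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 468  # same sample size as the study, but the values are synthetic

ethics = rng.normal(size=n)               # ethical awareness
risk = 0.5 * ethics + rng.normal(size=n)  # path a: ethics raises perceived risk
use = -0.6 * risk + rng.normal(size=n)    # path b: risk lowers use; direct path c' = 0

def slopes(y, X):
    """Fit OLS with an intercept; return the non-intercept coefficients."""
    return sm.OLS(y, sm.add_constant(X)).fit().params[1:]

a = slopes(risk, ethics)[0]                                # ethics -> risk
b, c_prime = slopes(use, np.column_stack([risk, ethics]))  # risk -> use, and direct effect
print(f"direct effect c' = {c_prime:.2f} (near zero), indirect effect a*b = {a * b:.2f}")
```

With the effects wired this way, the regressions recover a near-zero direct path from ethical beliefs to use and a clearly negative indirect path through perceived risk, which is the general shape of the pattern described above.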
What risks do students see in ChatGPT?
The results indicate that students are concerned about being penalized for misuse, becoming too dependent on the tool, or receiving low-quality or incorrect information. These perceived risks (ethical, academic, and technical) played a central role in whether or not they chose to use ChatGPT.
What should universities take from this?
These results support Rational Choice Theory: the idea that students make decisions by weighing perceived risks against rewards. But they also highlight a critical insight: we won't change behavior with rules alone. If students fail to recognize the risks associated with the uncritical use of AI tools, even strong ethical convictions may not be sufficient to prevent misuse. That's why helping students recognize the potential downsides of generative AI, from dependence to misinformation, may be more effective than simply telling them not to use it.
Should we ban AI tools like ChatGPT from academia?
Instead of prohibitions, universities should invest in AI literacy, clear guidelines, and open dialogue. Some steps could include:
a) Defining acceptable use: explain when and how tools like ChatGPT can be used ethically in coursework.
b) Promoting critical thinking: help students assess the reliability and limits of AI-generated content.
c) Fostering a culture of integrity: not through fear, but through reflection and shared values.
If we want students to make responsible decisions, we need to create space for them to think critically and act ethically, not just follow rules.
In conclusion, ChatGPT is not inherently a threat to academic integrity. But when used without reflection or guidance, it can be. What students need is support, not just in learning how to use AI, but in understanding why, how, and when they should (or shouldn't) use it.
About the authors
Silvia Ortiz-Bonnin, Ph.D., is an Associate Professor of Work and Organizational Psychology at the University of the Balearic Islands (UIB) and co-director of the Master's Degree Program in Human Resources Management at the UIB. Her research focuses on gender and on work and organizational psychology, especially psychosocial working conditions and wellbeing in the hospitality and tourism industry, and more recently on artificial intelligence in higher education and employability.
Joanna Blahopoulou, Ph.D., is an Assistant Professor in the Department of Psychology at the University of the Balearic Islands (UIB). She received her doctorate from the Ludwig-Maximilians University (LMU) in Munich, Germany. Her research focuses on gender, work and organizational psychology, and artificial intelligence in higher education and employability.