Chat or Cheat? What Drives Students to Use ChatGPT?

Students who care about academic honesty tend to see ChatGPT as risky, and that’s what makes them use it less. A conversation with Drs. Silvia Ortiz-Bonnin and Joanna Blahopoulou about AI in the classroom.

Explore the Research

SpringerLink

Student perceptions of ChatGPT: benefits, costs, and attitudinal differences between users and non-users toward AI integration in higher education - Education and Information Technologies

Today, there is no doubt that Artificial Intelligence (AI) presents both opportunities and challenges in higher education. This study examines three key areas: (1) students’ use of ChatGPT, (2) their perceptions of its benefits and costs, and (3) the differences in attitudes toward AI integration in higher education between ChatGPT users and non-users. A sample of 737 undergraduate students at a Spanish university answered an online survey. The quantitative analysis revealed a high prevalence of ChatGPT use for academic and personal purposes, with students identifying its 24/7 accessibility as a major advantage, along with the time-saving benefits it offers. However, concerns were raised about potential costs, including the devaluation of university education when students rely on ChatGPT to complete assignments. Results indicate significant differences between ChatGPT users and non-users: users generally support AI integration in higher education, particularly in teaching methods, while non-users often oppose its integration, advocating for measures such as banning AI in universities. Our study provides valuable insights into student perspectives on the integration of ChatGPT in higher education, emphasizing the contrasting viewpoints of users and non-users. The findings underline the need for universities to actively involve students in shaping policies on artificial intelligence while offering targeted training to promote its responsible and ethical use. Furthermore, universities should support educators in adapting their teaching methodologies to the digital era by incorporating innovative strategies that enhance both teaching and learning.

Introduction

Artificial intelligence has entered university classrooms without a clear rulebook. Tools like ChatGPT are now part of students’ everyday learning, helping them draft texts, brainstorm ideas, or understand difficult content. But their use also raises uncomfortable questions: Where do we draw the line between chatting and cheating? And what influences students’ decisions to use or avoid ChatGPT in their studies?

Does ethical awareness discourage AI use?

That’s what we initially expected: that students’ awareness of academic dishonesty (for example, viewing behaviors like cheating on an exam or submitting someone else’s work as one’s own as highly dishonest) would lead them to use ChatGPT less. But the data told a different story. Our recent study of 468 undergraduates shows no direct relationship between students’ ethical beliefs and their use of ChatGPT. Instead, students who considered academic dishonesty a serious issue tended to perceive ChatGPT as risky, and it was that risk perception, not the ethical belief itself, that explained their lower usage and weaker intention to continue using the tool.

What risks do students see in ChatGPT?

The results indicate that students are concerned about being penalized for misuse, becoming too dependent on the tool, or receiving low-quality or incorrect information. These perceived risks, whether ethical, academic, or technical, played a central role in whether or not they chose to use ChatGPT.

What should universities take from this?

These results support Rational Choice Theory, the idea that students make decisions by weighing perceived risks against rewards. But they also highlight a critical insight: rules alone won’t change behavior. If students fail to recognize the risks of using AI tools uncritically, even strong ethical convictions may not be enough to prevent misuse. That’s why helping students recognize the potential downsides of generative AI, from dependence to misinformation, may be more effective than simply telling them not to use it.

Should we ban AI tools like ChatGPT from academia?

Instead of prohibitions, universities should invest in AI literacy, clear guidelines, and open dialogue. Some steps could include:

a) Defining acceptable use: explain when and how tools like ChatGPT can be used ethically in coursework.

b) Promoting critical thinking: help students assess the reliability and limits of AI-generated content.

c) Fostering a culture of integrity: not through fear, but through reflection and shared values.

If we want students to make responsible decisions, we need to create space for them to think critically and act ethically, not just follow rules.

In conclusion, ChatGPT is not inherently a threat to academic integrity, but when used without reflection or guidance, it can be. What students need is support, not just in learning how to use AI, but in understanding why, how, and when they should (or shouldn’t) use it.

Author bios

Silvia Ortiz-Bonnin, Ph.D., is an Associate Professor of Work and Organizational Psychology at the University of the Balearic Islands (UIB) and co-director of the Master’s Degree Program in Human Resources Management at the UIB. Her key research areas are gender and work and organizational psychology, especially psychosocial working conditions and wellbeing in the hospitality and tourism industry, and, more recently, artificial intelligence in higher education and employability.


Joanna Blahopoulou, Ph.D., is an Assistant Professor in the Department of Psychology at the University of the Balearic Islands (UIB). She received her doctorate from the Ludwig-Maximilians University (LMU) in Munich, Germany. Her key research areas are gender and work and organizational psychology, and artificial intelligence in higher education and employability.



Comment from Roald Leeuwerik, about 1 month ago:

Interesting reflections! I think AI can support our work, definitely not take it over or replace us. With the rapid developments in AI, we all still need to find our way. It is the right moment to think about the responsible use of AI, our own accountability, and how it can support us now and in the future.
