Beyond the Code: Why Feelings Are the Missing Link in AI Education

Universities are racing to teach students the cognitive side of AI — the algorithms, the code, the data. A new study from Pakistan suggests they’re forgetting the half that actually makes learning stick.

The AI wave has hit higher education hard. From intelligent tutoring systems to personalised learning environments, universities are scrambling to catch it — mostly by teaching the mechanics: the algorithms, the code, the data pipelines. But a new study of 237 computer science undergraduates across three campuses of COMSATS University Islamabad argues that’s only half the story. The other half is affective: how students feel about AI. And, as it turns out, that half may matter more.

What is Affective AI Literacy?

The ABCD model frames AI literacy as four-dimensional: Affective (emotions and attitudes), Behavioural (usage), Cognitive (knowledge), and Digital/ethical. Most curricula fixate on the cognitive. This study zooms in on the affective — a student’s emotional and motivational readiness to engage with technology. It’s intrinsic motivation, curiosity, and self-efficacy rolled into one: the quiet internal voice that says “I can do this,” and “this is interesting to me.” The researchers wanted to know whether that inner readiness actually shifts how useful and how easy AI feels in practice.

Confidence shapes reality

Using structural equation modelling, the researchers found a robust positive link between affective AI literacy and perceived usefulness. Emotionally ready students see AI as a partner, not a hurdle — and that perception drives productivity.

The ease-of-use bridge

The more practical finding: emotional engagement also raises perceived ease of use. Fear and anxiety make tasks feel harder; self-efficacy makes them feel intuitive. A positive attitude lowers the mental tax of learning a new tool, creating a virtuous loop of reduced resistance and deeper integration.

The real secret to satisfaction

Affective literacy does nudge satisfaction directly — but the magic is mediation. Perceived ease of use acts as a bridge:

  1. Student builds affective AI literacy (confidence, motivation)
  2. Confidence makes the AI tool feel easier to use
  3. Ease of use translates into genuine satisfaction

The model explains roughly 62% of the variance in student satisfaction. Ease of use isn’t a design nicety — it’s the psychological hinge connecting attitude to outcome.
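The mediation logic above can be sketched numerically. The simulation below is a hypothetical illustration, not the authors' dataset or their structural equation model: it generates toy scores under assumed coefficients and then recovers the indirect path (affect → ease of use → satisfaction) with ordinary least squares, the classic Baron–Kenny decomposition in which the total effect splits exactly into a direct effect plus the product of the two mediated paths.

```python
import numpy as np

# Hypothetical illustration of the mediation path described in the study:
# affective literacy -> perceived ease of use -> satisfaction.
# All data and coefficients here are simulated assumptions.

rng = np.random.default_rng(0)
n = 237  # sample size matching the study's cohort

affect = rng.normal(size=n)                                # affective AI literacy
ease = 0.6 * affect + rng.normal(scale=0.8, size=n)        # path a (assumed)
satisfaction = (0.2 * affect + 0.7 * ease                  # paths c' and b (assumed)
                + rng.normal(scale=0.8, size=n))

def ols(y, X):
    """Least-squares slope coefficients, with an intercept column added."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]  # drop the intercept

a = ols(ease, affect[:, None])[0]                          # affect -> ease
c_prime, b = ols(satisfaction, np.column_stack([affect, ease]))
total = ols(satisfaction, affect[:, None])[0]              # affect -> satisfaction

indirect = a * b  # the mediated ("bridge") effect
print(f"total={total:.2f} direct={c_prime:.2f} indirect={indirect:.2f}")
```

With a single linear mediator, the decomposition `total = direct + indirect` holds exactly, which is why ease of use can carry most of the effect even when the direct path from attitude to satisfaction is modest.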

What this means for educators

Dropping AI tools into classrooms isn’t enough, especially in resource-constrained settings like Pakistan’s universities. Curricula need to reach beyond technical training into emotional design: hands-on workshops that demystify the technology, reflective assignments that put the human back into human–AI interaction, and teaching that makes AI feel emotionally intuitive rather than merely functional. Confidence, in short, deserves the same lesson plan as code.

The technology is artificial. The learning, still, is deeply human.

