Centering Students in Learning About- and With- AI

Perspectives on AI and education with Drs. Eric Klopfer and Christina Bosch from MIT

A century ago, schools responded to industrialization with compulsory education that looked like rows of single desks and passive learning. Decades later, Seymour Papert warned that computers in education risked replicating, rather than subverting, the limitations of such transmission-based instruction. Another few decades later, and the advent of AI again puts us at a crossroads about what learning should look like.

Education was especially unprepared for the arrival of generative AI, which immediately made many traditional tasks and assignments obsolete. Responses have ranged from refuting the inevitability of AI in education to proposing that AI replace teachers. Some advocate for AI tutors to personalize the same content we’ve taught for decades, which might improve traditional outcomes. Others see that AI poses risks and opportunities that demand a more fundamental shift in both what and how students learn. We could hang on to our old assignments by sequestering students with bluebooks. Or we could simply let students use AI for everything with reckless abandon, as one AI industry leader suggested in a recent talk here on our campus.

Clearly, there are a few different schools of thought about AI in education – but many of these folks are not actually deeply engaged in, well, schools.

We argue that today’s educational response to AI should be dictated not by what the technology makes possible, but by what experienced educators and education leaders know we need in a rapidly changing digital landscape. We envision the future of learning as amplifying the features that make us human – like creativity, socialization, and problem-solving.

If this sounds obvious, consider that a recent survey of Gen Z finds that young people overwhelmingly learn about AI from social networks and news feeds. Only 14% of respondents listed educators as a source of information about AI. Given that demystifying AI may reduce its appeal, it’s unlikely that kids will learn to open up the black box of machine learning or expose the Wizard of Oz in neural networks through a continued reliance solely on corporate AI platforms and tools that mirror the kind of individual passive learning of the 1920s.

If young people are going to develop the agency required to think about, design and produce novel contributions using AI rather than just consume and reproduce existing knowledge through AI technologies, educational contexts must lead with deliberate pedagogy. This means a coherent method for equipping learners with both the humanistic skills to think through problems carefully and the technical skills to know when and how to apply tools effectively for the betterment of themselves and humanity. Implementing such a pedagogy requires schools that empower educators to facilitate both students learning about AI strengths and weaknesses, and students learning about themselves as learners.

We can inform our strategic approach to AI and learning through existing models of learning. Constructionist pedagogy builds on constructivism, a widely adopted theory of learning, through two principles that are key in this age of AI education. First, students actively build real-world, tangible products that they can call their own. Second, students do this in a social environment where they learn with and from others and, importantly, share their work as part of that community.

What is this like in practice? Picture the pixelly turtle from Logo, the first programming language for kids, and the testbed Papert used to develop constructionism in real classrooms, where students could easily design and create their own computer programs. Today, the Responsible AI for Social Empowerment and Education (RAISE) initiative at MIT has developed curricula and tools following the same approach.

As an example, modules about AI literacy and AI application are freely available through the Day of AI curriculum in seven languages. Over 2.2 million users have chosen to immerse themselves in creative making with AI tools, for as little as half an hour or across several sessions. One teacher said that the curriculum showed students “they can become programmers and creators of this technology and not just passive users of it.”

Or consider the Responsible AI for Computational Action (RAICA) curriculum, which provides accessible instructional materials and digital tools that even novice AI teachers can use to guide all learners in making AI-enabled projects designed from scratch (quite literally, Scratch with AI extensions in the RAISE Playground). With RAICA, one classroom created an “AI Bill of Rights” and coupled their AI models with Lego robotics to prototype how oceanic research vessels might respond to cloud formations indicative of weather changes. Other projects have included image classification applications that help foragers avoid poisonous mushrooms, or help students in the school cafeteria sort recyclables from trash. Our data shows learners value the opportunity to create original AI projects for others, with others.

Critically, all of this work has been done in partnership with teachers, students and education leaders around the world. We know that even well-intentioned curricula and tools like these will only have an impact if we continue to co-design with communities of educators, and consider the learning experiences of effective teachers as much as we consider the student-facing materials. Many algorithm-enabled platforms offer opportunities for creative expression, and “AI tutors” support the delivery of information through Socratic questioning and formative assessment. These tools can and should have a place in the future of learning. However, in most current designs, the technology rather than the humans tends to determine the goals, steer the interests, and provide the canvas as well as the palette for self-expression. We need the reverse.

The stakes, as Papert foresaw, have to do with driving or resisting the change that we know is coming. We can either let AI literacy in the public at large be shaped by passive consumption dictated by technology, or we can build learning environments where students learn to build our digital futures. Communities of learning that take a constructionist approach will provide a path forward where teachers shape the future of learning and students have the experiences and technical skills to shape the future for humanity. 

___

Dr. Eric Klopfer is Professor and Director of the Scheller Teacher Education Program and The Education Arcade at MIT. He is also co-PI of MIT's RAISE initiative in AI education. His research focuses on technology and pedagogy for building understanding of science, technology, engineering and mathematics (STEM) and systems. He has special interests in games, simulations and computing as pathways to STEM learning. He is the co-author of the books “Adventures in Modeling,” “The More We Know,” and “Resonant Games,” as well as author of “Augmented Learning.” His lab has produced software (from casual mobile games to MMOs to AR/VR) and platforms (including StarLogo Nova and TaleBlazer) used by millions of people, as well as online courses that have reached hundreds of thousands. Klopfer is also the co-founder and past President of the non-profit Learning Games Network.

Dr. Christina Anderson Bosch is a research scientist on the RAICA project. Over the last 15 years, she has pursued a range of experiences with/in education systems in various U.S. contexts and internationally, focusing on evidence-based instruction, inclusive curriculum design and evaluation, teacher professional development, and partnerships that advance access, equity, and interest in life-long learning. Dr. Bosch’s various lines of work share creativity, rigor, and global citizenship as values. She holds a Ph.D. in Special Education from the University of Massachusetts Amherst, a M.Ed. in Mind, Brain, and Education from Harvard University, a M.A. in Special Education from American University, and a B.A. in English from the University of Vermont and is grateful for the many lineages that have shaped her perspective.

