Take a big breath…
…because education is no stranger to the impacts of disruptive technologies.
The emergence of the microcomputer, access to the Internet, the explosion of cell phones and social media, and 1:1 laptop programs have all influenced the school experience and continue to do so.
As with most technologies, there was great excitement about their use in teaching and learning. Over time, their limitations became evident, and educators have put much effort into finding a balanced and practical approach to using these resources with students. All technologies seem to follow this pattern.
Now comes AI, and we are in the middle of seeing the same process play out (see the Gartner Hype Cycle for AI).
There are calls throughout the education world about the threat of AI to K-12 and higher education, much of it hyperbolic. Students will cheat, teachers will use AI to write lessons and stop being instructional designers, teachers will assess students with AI, students will not learn how to think critically, students will develop an overreliance on the tool, and AI will hallucinate and make up what it doesn’t know. Anarchy and general chaos will follow. Education, as we know it, is over (see calculator).
Take a big breath about AI. It’s technology; for now, you can use it or not. No one is making anyone use ChatGPT unless they want to. Yes, people are developing AI frameworks for schools to help them think through the role that AI will play in education. That’s not necessarily bad, and it does not force anything on anyone; such frameworks can guide board policy development and the creation of school guidelines. Both are important and can promote wise use rather than leaving a disruptive technology ecosystem like AI as wide open as the Wild Wild West. It’s good to develop a strategic plan for how AI can be part of the educational system while also considering what AI shouldn’t be part of.
For perspective, most of my education clients are dealing with chronic absenteeism, lack of student engagement, and staffing their faculty. AI is out there, but it is really not a hot-button issue for them. Most of them are doing the daily hard work of school, grinding it out as they prepare for a new school year. AI is nice and everything, and it can do some pretty cool things, but we have more pressing things to do, thank you very much.
A more realistic threat to education is student disengagement, driven by outdated instructional practices and curriculum, as well as unimaginative expectations of what education can be and how it should serve the lives of children and young adults. This issue arises from the limited worldview of schools that see the outcome of education solely as college and career readiness. The world needs fewer accountants (with apologies to accountants everywhere) and more engaged citizens grounded in empathy, compassion, curiosity, and well-being, among other human qualities.
And doubling down on those human qualities is the key when thinking about how AI plays out in school. Humans possess qualities that AI does not (at least not yet): all of those I have mentioned, plus an understanding of the context of real human engagement and the ability to make judgments accordingly. You need humans for those kinds of interactions.
It is reasonable to accept that AI is good at analytical tasks, that it can help with productivity, and that it can perform generative tasks, quickly turning text-based prompts into imagery, video, and music (how it does this, and the ethical concerns associated with it, are not the point of this post). The danger is that AI use in schools will follow the path of most 1:1 programs, and that AI will simply become a new way of doing old things. Introducing AI to educators by saying that it can write emails for you, handle other menial tasks, and serve as a personal assistant reinforces this, rather than thoughtfully exploring the intersection of human uniqueness and AI capability, and what that could mean for the experience of school.
The real question is how AI can advance the human condition. Everyone is wrestling with this. It has always been an essential question when addressing the impact of technology, and it should be asked again. It is being asked.
What other questions will you ask, given that questions are often more important than their answers when engaging in meaningful exploration and innovation? How can we ensure AI aligns with ethical and environmental standards? How will educators help their school communities understand the impact of AI on education? How can we balance technological advancement with human values?
My suggestion? Schools can find a successful path for AI by learning first. Take the time to learn, and then plan to prototype the use of AI. Within the prototype experience, develop ways for educators to study the application of AI to actual needs. Add the voices of teachers and students into the mix. Develop the productivity side of AI, but focus on its generative capacity. Stop arguing about creativity and whether AI can make you more creative; do some exploration with Midjourney or Leonardo and make some cool stuff (like the three-handed teacher in my graphic from ideogram.ai). Create something in 25 seconds that would take hours to create in Photoshop, if ever. Explore the intersection of human capacity with the capacity of artificial intelligence. Do this before you write policies and guidelines, so that people with real experience with AI can write both.
AI is here to stay. As a profession, we have been here before. Education will make sense of this tool, as it did with the disruptive technologies before it. The more critical issue is focusing on creating learning experiences that resonate with students, inform their future, and provide the context for developing the skills and dispositions of human beings. If AI fits into that or even extends the capacity for that to happen, great. If not, that’s ok too. Ultimately, the priority should always be fostering a meaningful and impactful education, regardless of the tools educators use to accomplish that.