The global market intelligence firm International Data Corporation estimated in 2024 that more data had been created in the three preceding years than in all of history before that. With this trend predicted to continue apace, aided by generative artificial intelligence tools that can churn out convincing falsehoods and deepfakes in unlimited quantities at unprecedented speed, K-12 educators must prepare young minds to navigate an ocean of information that didn’t exist until recently. To make such a daunting task manageable, a pair of presenters at this year’s ISTELive 25 conference in San Antonio said on Wednesday that the answer lies in qualities AI doesn’t have, such as critical thinking and cognitive independence: the qualities that make us human.
UNDERSTANDING THE PROBLEM
Speaking from experience as a learning manager for the ed-tech teaching nonprofit Ed3 DAO, Armine Movsisyan first summarized the problem as one of “too much information.”
“It’s not just the fact that we can be fooled by what’s online with these deepfakes and these confusing sources,” she said. “It’s the mere existence of them that empowers people to think that they can dismiss what’s actually true.”
Movsisyan added that the consequences of this excess, and of the channels through which it arrives, include a “flattening” of the culture and the creation of echo chambers in several ways: filter bubbles allow people to see only the information they want to see; confirmation bias leads them to reconfirm what they want to believe instead of questioning it; homophily, a preference for people similar to oneself, further narrows their interest in conflicting arguments and sources; and cognitive dissonance makes them uncomfortable with having their beliefs challenged, even by the truth.
In effect, people seek comfort, which creates echo chambers that exclude some reality. As generative AI gets more convincing, its ability to reinforce these chambers will increase.
Victoria Andrews, a partner at the educational design and advocacy firm Getting Smart, offered an encouraging counterpoint: Teens are very interested in learning how to deal with this.
“The state of the union, when it comes to AI literacy, is that young people want it!” she said. “Think about it: We’re talking about Generation Alpha and Generation Z. They were literally born with TikTok coming out of their ears. They were literally born with a phone in their face from when they came out of their mother. Their whole lives have been recorded, so they are constantly being bombarded with media. But they want the tools to be able to navigate it. They know there’s a lot of fake and false information, but they don’t know how to manage all of it.”
Referring to the growing tsunami of AI-generated nonsense as “slop,” Andrews pointed out that awareness of the problem had entered the zeitgeist: “slop” was a finalist for the 2024 Oxford Word of the Year, and Oxford’s eventual choice, “brain rot,” was arguably in the same vein.
OVERCOMING THE SLOP
Movsisyan and Andrews agreed that the first step for educators in bringing media literacy to students is to lead by example: learn what it is and model it for them.
“How we consume information impacts how we’re going to share and talk to young people about how they consume information,” Andrews said.
To do this, Movsisyan pointed to several resources, including Ed3 DAO’s own media literacy courses as well as materials from other organizations: the National Association for Media Literacy Education, Ad Fontes Media and its media bias chart, NewseumEd, IC4ML, the News Literacy Project and Media Literacy Now.
She went on to specify four building blocks, or educational foundations, that teachers can impart to give students confidence in dealing with AI:
- critical thinking skills to question assumptions, evaluate resources, recognize biases and consider alternative perspectives
- basic knowledge of AI to understand its strengths and limitations and what it can and cannot be used for
- an awareness of their own values, beliefs, motivations, perspectives and emotional states, and those of others
- cognitive independence to use AI as a thought partner, including knowing when and how to off-ramp
Andrews and Movsisyan both emphasized that media literacy isn’t a standalone course or a single one of those foundations but all four together, and that they should be embedded in lessons across other subjects.
To accomplish this, Andrews advised the audience to teach students, and themselves, to do the following when receiving information:
- Scrutinize. Before believing content, check for biases, sourcing and possible manipulation.
- Pay attention to the limitations. Expand your perspective by using multiple sources and considering the limitations of the data. Don’t fall into algorithm-driven echo chambers; throw off the algorithms tracking you.
- Observe. Ask why the media was created. Consider whether it was meant to inform, persuade, entertain or manipulate. Identify potential hidden agendas, political groups or AI-driven content farms.
- Partner. AI is not the authority; you are. Partner with AI at all levels of Bloom’s Taxonomy, but remember that AI can never truly “understand” the way humans do.
Everyone is trying to find their way in a new information ecosystem, Andrews noted, but the least educators can do is contribute more to the solution than to the problem.
“We don’t want to contribute to the problem, right?” she said. “We don’t want to contribute to the slop out there, so we ask you kindly, with love and care for everything we do, to pass on this message to drop the slop.”
Andrew Westrope is managing editor of the Center for Digital Education. Before that, he was a staff writer for Government Technology, and previously was a reporter and editor at community newspapers. He has a bachelor’s degree in physiology from Michigan State University and lives in Northern California.