Picture a world in flux: media splintering along ideological lines and critical information next to impossible to come by. That was the landscape Weber Shandwick and the Institute for the Future surveyed back in 2020. We wanted to know how people were finding their way during lockdown, how they were deepening digital connections, and who they turned to for truth and grounding.
Among the 50 trends initially uncovered in the research, one felt both inevitable and controversial: “Online cheating.” Well before ChatGPT stormed the world, the topic sparked debate within our research team.
Online cheating wasn't just a consequence of remote learning. The new behaviors we uncovered were a window into the future and a glimpse of Generation Z: resourceful, tech-native, and unafraid to chart their own path, even if that means bending the rules.
Nuance and foresight led us to include it in the final report. This game-changer centered on a savvy, generational modus operandi and a DIY ethos.
Viewed through a Gen Z lens, DIY is a hack, and through older ones, a cheat. We presented a different perspective to capture what was really going on.
Simply put, younger generations find a way to work around systems that don't work for them.
With ChatGPT in hand, students are hacking away at higher education. A year and a half after its debut, school guidance on its use remains spotty at best. The application of GPTs extends well beyond writing a term paper or essay. Using them more expansively can't be a side hustle; it must be integrated into classroom pedagogy.
Why’s an outsider like me raising the education flag?
The conflict is both professional and personal. On the job, candidates arrive unprepared to work on a team that integrates genAI into its thinking and work. Personally, I have two kids in college, throttled by murky rules about what they can and can't do.
My daughter will soon graduate from the University of Wisconsin's journalism school. Like others in her circle, she's constrained by poor guidance on the rules for GPTs and on how they could be additive to her education.
For journalists, using them as tools for thinking is now possible. AI-powered agents can sift through financial records, government reports, and social media activity. They can be designed as custom feeds to surface news and open-source intelligence. And yes, they can make crafting stories more efficient. Yet there's a void in using computation to improve research and reporting. As schools slow-play integrating genAI into curricula, Rome burns.
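To make that concrete, here is a minimal sketch of the kind of report-sifting helper a journalism student might put together. It assumes the official `openai` Python package and an `OPENAI_API_KEY` in the environment; the model name, prompt, and file name are illustrative placeholders, not a prescribed workflow.

```python
# A minimal sketch of a document-sifting helper for reporting research.
# Assumes the official `openai` package and OPENAI_API_KEY in the environment;
# the model name, prompt, and file name below are illustrative placeholders.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

def summarize_report(path: str) -> str:
    """Pull the key figures, sources, and claims out of a long public report."""
    text = Path(path).read_text(encoding="utf-8")
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a research assistant for a reporter. List the key "
                    "figures, named sources, and claims worth verifying, with "
                    "section references."
                ),
            },
            # Truncate very long documents to stay within the model's context.
            {"role": "user", "content": text[:50000]},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize_report("city_budget_2024.txt"))  # hypothetical source file
```

The point isn't the particular tool; it's that a working research aid like this takes a few dozen lines, which is exactly the kind of fluency a newsroom now expects.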
For reference, a TikTok she sent me the other day shows the disconnect between students and professors and the potential yet to be realized.
This TikTok, "When you finished a project in 2 days instead of 2 months," has millions of likes, shares, and comments. In shorthand, the thread highlights:
…"professors checked out."
…"ChatGPT is my new best friend."
…"lit bc u use it for everything."
…"always there for me."
It makes me wonder, with ChatGPT as a flashpoint, who’s cheating who?
GenAI Will Expose Cracks in Old Systems, Modes of Thought
All teachers should brace for what's to come. According to serial entrepreneur and technologist Peter Diamandis, we’ve only seen 1% of total AI investment in the market. If that's remotely accurate, imagine the disruptions on the horizon. GenAI tools like Grammarly, Microsoft Copilot, and Google Gemini are already embedded in apps and integrated into our gear. Soon, PCs marketed as AI computers will hit the market.
Administrators are overfocused on plagiarism and overlook how pervasive genAI will become. Telling students they can’t use GPTs is like telling them they can't use the Internet. Professors must think beyond bans and controls on how kids think. Using these tools to augment traditional classroom learning builds the smarts needed to thrive in a changing world.
Who is facilitating what this looks like?
People like Douglas Rushkoff, Ethan Mollick, and Claire Wardle are stepping up to the challenge. Their research orients toward application and human implications more than the mechanics of machine learning, neural nets, transformers, or natural language processing. How we will use these tools is where the action is.
What implications must we consider as they drive new learning behaviors? How quickly will new curriculums, applied across all departments, follow? Structurally speaking, can colleges evolve quickly enough to develop, approve, and update genAI courses?
University-wide guidance on using genAI is elusive, putting decisions into the hands of department heads or class leaders. Some schools, like Northwestern, have published ChatGPT guidance of their own.
It’s unclear if students know and will abide by these codes. I've been walking the beat, talking to professors and students. Despite frameworks like Northwestern’s, there's a big disconnect on campus. Students are finding their own way, doing whatever works for them. And the teachers... well, there's a gap in understanding when it comes to what these things can really do.
Marshall McLuhan said a teacher's job was to save students’ time. But it's not just about getting work done faster. Kids must become masters of the tools that will shape their future. They need to know what genAIs can and can't do, how to use them ethically, how to apply them to tasks, and how they'll change and expand chosen fields of work.
Despite dystopian depictions of a dumbed-down generation or mass automation, consider how the rollout of the Internet and social media led to an explosion of new roles to be filled. We’re now forking into a new digital direction. Like previous waves, a genAI-powered world requires an enormous amount of construction and new jobs to do the building. We need students ready to jump in.
Cheating or New Street Smarts?
The core issue plaguing progress is the belief that using genAIs amounts to cheating. I’ve heard from job candidates that shaming accompanies using them. The emphasis on “academic integrity” misses the bigger picture. Renowned futurist Alvin Toffler claimed decades ago, “The illiterate of the 21st century will not be those who cannot read and write, but those who cannot learn, unlearn, and relearn.” Today he'd be speaking to teachers and students alike.
Pretty soon, all computers will integrate genAIs into their OS. Beyond ChatGPT, you have Perplexity guiding you through a search query and Consensus cutting through arcane research. With ChatPDF, you can distill and explore any PDF on your terms, or you can make an image on command with agents like DaVinci.
The trouble is, when I talk about anything beyond ChatGPT, students’ eyes go dead. They can’t see a more expansive canvas. That's the crux of the problem: kids who don't know which tools to use or how to apply them productively, and teachers who can't show them how.
Some places—those with the resources and foresight—have begun to map new territory. The University of Southern California invested $10 million in seed money for its Center for Generative AI and Society. The University of Michigan is offering workshops to provide professors and staff with much-needed guidance. Northwestern is spinning up genAI boot camps for a hefty $12,995.
Old Wine Poured from New Bottles Tastes Bad
The application of genAI will progress through cycles: first replicating existing learning, then augmenting it, and finally enabling native experiences that redefine education altogether. Educators must focus on the last of these, not the first.
Envision a world where students are not passive recipients but imaginative collaborators, critical thinkers, and skilled curators of collective intelligence.
Imagine a kid glazing over, trying to figure out how a plant works. With a little AI magic, photosynthesis can be explained in any number of ways. GenAIs can synthesize a lesson with the simplicity of a 5th-grade explanation, produce a poetic representation of a plant cell, or offer clarity through AI-generated renderings. If you know how to use them, genAIs don’t dilute understanding. They multiply avenues to it.
In literature studies, students can become detectives, wielding genAI to generate a kaleidoscope of Shakespearean sonnets, each refracted through a different prism of voice and style. History students become analysts and arbiters of truth, using genAI to conjure fictional dispatches from the past and develop their discernment to separate fact from fiction. For writing, they become deep subject matter experts and master editors, honing their craft by evaluating and refining genAI-generated copy.
These are rudimentary snapshots of the cognitive shifts genAI makes possible. They require a transition from memorizing and reciting facts to rigorously investigating and creatively applying knowledge. For educators, failure means refusing to embrace new tools and apply them quickly in a fast-changing world.
The question isn’t whether kids will use genAI but to what end. As McLuhan might suggest, the message will be shaped by those who master this new medium.
We can’t let an educational stance that’s past its born-on date leave the next generation behind.
Perspective Agents was recognized as an Amazon bestseller and the #1 new release in computers and technology, AI and semantics, and social aspects of technology.
Thank you to those who purchased the book. If you're willing, I’d be grateful if you posted a review on Amazon. Good or bad, the feedback means a lot.
Thanks for reading and supporting my work.