ChatGPT is Quietly Rewiring Your Brain
That's not necessarily a bad thing — if you know what to watch for
Soon after ChatGPT launched in late 2022, I asked it to summarize a policy paper. The model quietly prioritized some arguments, downplayed others, and conveyed a tone of clinical neutrality. I walked away with the AI's filtered version of the document, unaware of what had been left out.
This experience comes to mind whenever AI's effects get reduced to simple questions of laziness or intelligence. That framing misses something fundamental: these aren't just smart tools doing our work for us.
In my book, I defined AIs as Perspective Agents—algorithmic systems that control what information reaches us, shape how we understand it, and ultimately construct our reality.
We refer to these systems as "artificial intelligence" or "large language models," but neither term fully captures what they actually do.
They don't just retrieve information; they frame it. They influence what we notice, what we overlook, and what we perceive as reasonable or accurate. They're invisible editors of thought rather than neutral calculators of data.
AIs are quietly reshaping human understanding across domains—from medical diagnoses to legal precedents to investment strategies.
New research shows these changes aren't just practical adjustments—they're neurological. Understanding how AI changes our brains will determine whether it becomes a cognitive partner or a cognitive replacement.
The Rewiring Is Real
Nataliya Kos’myna, a research scientist at MIT, recently published a widely reported experiment. Her team asked people to write essays while monitoring their brain activity with EEG sensors. Some participants worked alone, others used Google, and a third group relied on ChatGPT.
The results revealed how ChatGPT measurably reshapes thinking. When participants used AI assistants, their brains showed significantly less activity in regions associated with deep reasoning, memory formation, and sustained attention.
The AI had, in effect, relieved them of the human work of synthesis and judgment. Those using ChatGPT couldn't remember or accurately quote from essays they had supposedly just written. They felt less ownership of their ideas.
The agent hadn't just helped them think; it had substituted its frame of reference for theirs.
When asked to complete a similar task without AI afterward, their cognitive patterns resembled those of beginners rather than those who had just engaged in complex reasoning. The AI had framed their perspectives and diminished their ability to form them independently.
This phenomenon isn't just happening in research labs. It's unfolding in doctors' offices, where physicians rely on AI to suggest diagnoses; in law firms, where attorneys draft arguments with the assistance of language models; and in boardrooms, where executives rely on AI-generated reports to make multimillion-dollar decisions.
At first glance, this seems like harmless efficiency, a form of cognitive offloading not unlike using calculators to free us from basic math.
Yes, AI can streamline routine tasks and reduce friction where expertise is already strong. But the danger is more subtle.
The study suggests participants offloaded not just effort but judgment. Without our noticing, AIs can take over the mental processes that define human expertise, eroding our ability to develop and sustain those skills independently.
It's not surprising that Kos’myna’s findings sparked extensive discussion and media coverage that amplified public fears. Here's a sampling of the headlines:
From The Hill: "ChatGPT Linked to Cognitive Decline." From Fast Company: "Reliance on ChatGPT Might Be Really Bad for Your Brain." From Axios: "AI's Great Brain-Rot Experiment." And this zinger from David Brooks at The New York Times: "Are We Really Willing to Become Dumber?"
If you follow the media filter, the conclusion is stark: ChatGPT is making us cognitively weaker, and continued use will leave us mentally diminished.
An Amplified Alternative
Here's what the doomsayers miss: AIs don't have to diminish our intelligence. They can amplify it, provided we learn to work with them consciously rather than passively.
The same MIT study offered a hopeful insight. Some in the study used ChatGPT as a cognitive amplifier rather than a replacement.
Rather than uniformly decreasing mental effort, participants who had previously written without the aid of AI showed a significant increase in widespread brain connectivity when they began using the model.
These people actively integrated the AI's suggestions into their thought process—a more demanding cognitive task than writing from scratch or passively editing AI output. The key difference? They treated the AI as an aid, not the authority.
When I ask ChatGPT to analyze a complex issue, I never accept its initial framing of the problem. I ask it to argue the opposite position. I request analyses from different cultural or historical perspectives. I probe its assumptions. Then, I synthesize what I've learned, relying on my knowledge and judgment.
It takes longer than simply accepting an AI's initial output, but it preserves the cognitive skills that passive use will erode. Amplification over efficiency is what makes AIs work for us.
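The interrogation loop described above can be sketched as a small prompt scaffold. This is a hypothetical illustration, not anything from the article or a specific product: the function names and prompt wording are my own, and `ask` stands in for whatever chat-completion call you happen to use.

```python
# Sketch of the "aid, not authority" workflow: rather than accepting a
# model's first answer, generate counter-framing prompts and leave the
# final synthesis to the human. All names here are illustrative.

def counter_framing_prompts(question: str) -> list[str]:
    """Build follow-up prompts that challenge the model's initial framing."""
    return [
        f"Analyze this issue: {question}",
        f"Now argue the opposite position on: {question}",
        f"Reframe this from a different cultural or historical perspective: {question}",
        f"List the assumptions behind your previous answers about: {question}",
    ]

def amplified_analysis(question: str, ask) -> list[tuple[str, str]]:
    """Run each prompt through `ask` (any function mapping a prompt string
    to a response string) and return (prompt, response) pairs for the
    human to synthesize -- the model never gets the last word."""
    return [(p, ask(p)) for p in counter_framing_prompts(question)]
```

In practice `ask` would wrap a real chat API; the point is structural: the loop produces raw material from several framings, and the synthesis step stays with the person.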
Discovery vs. Search
What if discovery, not search, became our default mode of working with AIs?
Imagine medical schools that teach students not just to use AI diagnostic tools but to understand how those tools frame medical problems — and what they might miss.
Picture law schools training lawyers to generate multiple AI-assisted arguments, then craft their nuanced synthesis, capturing the subtleties that machines overlook.
Envision business schools teaching executives to use AI for scenario planning while preserving the strategic intuition that comes from seeing what agents can't.
Yes, we should be cautious about using such powerful technology. But more than caution, I've found it takes intention: consciously shaping how we engage with AIs so they expand our thinking rather than replace it.
The goal isn't to work faster or think less. It's to preserve and amplify the thinking that gives work meaning and value: the doctor's clinical intuition, the lawyer's sense of justice, the executive's strategic wisdom, and the teacher's deep understanding of their students.
Your Cognitive Future
The next time you ask ChatGPT for help, try this: Don't take its initial framing at face value. Ask it to argue the opposite view. Request the same analysis from different perspectives. Step back and consider what it missed, what it got wrong, and what only you — with your unique experience and judgment — can bring to the conversation.
Your brain will thank you for the exercise. And you'll be better prepared for a future where human and machine intelligence fuse—without letting an AI’s framing quietly displace your own.
AI poses a King Midas dilemma. In the myth, Midas's touch turned everything—even his food and his daughter—to gold, so he could neither eat his food nor embrace her.
Unless we stay vigilant, every raw idea risks being "turned to gold" by AI — polished before we can work with it, leaving us unable to initiate or shape our own thinking.
Machines can generate possibilities and frame problems. But only we can decide which possibilities and frames matter — and why.
Nataliya Kos’myna will join Andus Labs’ ‘After Now’ assembly on July 23rd. She will discuss research findings that are influencing how leaders redesign workplaces and educational systems with the aid of AI. See more and sign up here.