Recognition Without Readiness
The Hidden Risks in an AI Economy
On March 11, 2020, Tom Hanks announced he had tested positive for the coronavirus. That night, an NBA game was halted and players were pulled off the court. Only then, after weeks of lockdown reports from China and Italy and the World Health Organization’s urgent bulletins warning of ‘disruptions to everyday life’, did much of America register what was happening. Reality arrived not through science, but through celebrity.
By then, it was too late to prepare. Despite unmistakable signs, an NBC poll found that more than half of Americans still believed their lives would change little, if at all.
In hindsight, we know what happened. Homes replaced headquarters. Corporations shifted to Zoom, playgrounds to Fortnite. Shifts expected to take years became reality in days.
We’re in a similar moment with artificial intelligence. Signs of disruption are everywhere, scattered across industries like breadcrumbs: law firms quietly feeding document review to LLMs, consultants shedding employees they cannot retrain, junior analysts delivering advice that once took decades of experience to develop.
The transformation is happening now. Yet, as with the early days of the pandemic, recognition is not the same as readiness.
The New Lens of Work
AIs alter more than tasks; they alter perception. They are perspective agents: filters that shape what we notice, what counts as valuable, and what falls away. A lawyer who uses AI to review contracts doesn’t just speed up the job; she begins to redefine which parts of her expertise are essential. A junior consultant who can generate senior-level analysis upends the hierarchy of expertise.
Research reinforces the need to broaden our perspective. Worklytics reports that 69% of knowledge workers use AI tools for research, content creation, and analysis. Gallup finds that adoption at work has nearly doubled to 40% of all employees in just two years.
Yet, according to KPMG, 57% of workers report hiding their AI use from their bosses (‘AI shame’), while only 7.5% say they have received adequate AI training. Small wonder that MIT found 95% of enterprise AI pilots fail to produce a positive impact. The pattern is familiar: we see it coming, but we don’t fully prepare for it.
Consider OpenAI’s recent professional benchmark report. Legal, financial, and retail experts with an average of fourteen years of experience designed tasks that typically require four to seven hours of human work. When both the experts and AI attempted the same tasks, the humans prevailed, but only by a slim margin.
AI’s shortcomings weren’t in reasoning but in surface issues: formatting, presentation, and following instructions to the letter. Flaws like these tend to be ironed out with each new model. More telling: hybrid workflows that combined AI output with human review delivered results 40% to 60% more efficiently than people working alone.
Displacement of Expertise
Until recently, AI’s promise was hobbled by its brittleness—one small error could topple an entire chain of tasks. The new generation of agents is different. They can plan, call on other agents, collaborate, and self-correct.
In his analysis of OpenAI’s report, Ethan Mollick, Co-Director of the Generative AI Labs at Wharton, described watching Claude 4.5 ingest an economics paper, parse it, rewrite the accompanying code, and rerun the analysis. It reproduced findings from the paper in minutes. For research fields long plagued by replication challenges, the implications seem remarkable.
Mollick also noted that the same models, given a single memo, produced seventeen PowerPoint decks: slick, well-formatted, and empty. The same tools that can deliver clarity can just as easily overwhelm us with noise.
In consulting, experience was the ladder to authority. Years of meticulous analysis led to sound judgment. Today, that ladder is being dismantled, rung by rung.
Accenture’s CEO, Julie Sweet, put it bluntly on her latest earnings call: “We are exiting people on a compressed timeline where reskilling, based on our experience, is not a viable path for the skills we need.” She described a change so significant it reverses “five decades of how we’re working.”
The chainsaws are out. Expertise is being dismantled and dispersed, carrying consequences that reach beyond economics into our psychology. What does it mean when machines can replicate decades of expertise in minutes?
Three Principles for Readiness
Perhaps the better question returns us to judgment, the one thing AI cannot replace. Consider what happens without it. Harvard Business Review coined a term for the influx of AI-generated reports, memos, and slide decks: workslop. It feels polished, formatted, grammatically sound, yet it is nearly useless. It’s organizational filler, work that fills the inbox without adding value.
The temptation is understandable. When employees can generate dozens of proposals in the time it once took to draft a single one, output swells. But output isn’t insight. The danger isn’t job loss—it’s judgment loss. Volume replacing reflection.
What if AI, deployed without discernment, accelerates us toward workplaces that prize velocity over value? Where the pressure to match machine speed eliminates time for reflection? How do we stop torrents of AI-generated content from washing away valuable expertise?
The path forward requires deliberate choices. First, organizations must invest in human-AI collaboration rather than replacement. We must train people not just to use AI tools, but to know when to refrain, when to slow down, and when to insist on reflection over velocity.
Second, we must sharpen the markers of valuable work. If we reward only efficiency and speed, we will inevitably get workslop. We must distinguish AI-generated output that genuinely enhances our intelligence from output that merely performs insight. That calls for simple but essential questions: Does this work create value, or does it check boxes? Does it protect human judgment or automate it away? Does it move us closer to meaningful progress, or just faster toward irrelevance?
Third, individuals must become perspective agents themselves—conscious architects of how AI shapes their work. It requires regularly stepping back to ask: What problems am I choosing to prioritize? What steps am I no longer considering? What kinds of thinking am I offloading, and what am I keeping?
The goal isn’t to resist AI but to direct it intentionally, ensuring that it serves us rather than replaces us.
The Question of Choice
Marshall McLuhan once observed that “there is absolutely no inevitability as long as there is a willingness to contemplate what is happening.” With COVID, we saw the signals but mistook recognition for preparation. We failed to build new systems, prepare people, or ask hard questions about what mattered most. The result was chaos.
Will we treat this moment as a warning, or as another Tom Hanks moment: a recognition that arrives too late to matter?
Unlike a virus, AI has no biological inevitability. It can be directed and framed with intention. The choice lies not in the algorithms but in our hands: in the work we value and in the judgment we insist on preserving.
COVID-19 taught us that the biggest changes occur in places we cannot see—behind the scenes, in our assumptions, in policy meetings, in systems that organize human effort.
For leaders, the question isn’t whether we can see the changes coming—it’s whether we’ll act on what we find. How we work, learn, and decide is being rewritten. Our only choice is whether we shape that change or let it shape us in the end.
I’ve explored these ideas further in Perspective Agents, a book that presents frameworks and cases tracing how AI unsettles and redefines the very idea of work. Thanks for reading.

