What AI actually changes about work and what it doesn’t
Directionally Correct Newsletter, The #1 People Analytics Substack
By: Angela Le Mathon, Cole Napper, & Alexis Fink
Here’s the core message of the article, and it’s quite simple and uncomfortable:
AI absolutely changes how work gets done, but it also exposes how poorly work itself is defined and understood inside most organizations.
More concretely:
Job descriptions rarely reflect the reality of work
Most work isn’t linear, digital, or cleanly task-based
Mediocrity and inertia are often structurally rewarded over excellence
Standardization is favored over experimentation
The “talent shortage” narrative is often more fiction than fact
Power, ego, and control frequently outweigh meritocracy and outcomes
Work is full of undocumented workarounds and tacit knowledge
Our sense is that, as a consequence, many AI transformations stall not because of the technology, but because humans want to retain control over progress. There’s a preference for a tidy illusion of work over its messier, more political reality. This is not a “5 steps to success” feel-good article.
Organizations need to adapt. But will they? We intend to give leaders a roadmap for navigating the changes that constitute “AI workforce transformation” (already the buzzword of 2026): what will change, what will stay the same, and where we go from here.
Section 1: What AI Actually Changes
The first challenge in navigating organizational adaptation to AI is clarifying what AI is, what it can do, and, crucially, what it requires.
Much of the public discourse frames AI’s impact in terms of jobs lost. In practice, this framing is misleading. In most cases, jobs are changed rather than eliminated. Predicting that 12% of work can be automated does not imply that 12% of an organization’s headcount can be removed. While some roles will disappear, the dominant effect of AI is the reshaping of work – hence the term “transformation” – and the creation of new capabilities often layered on top of existing roles.
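The gap between "share of tasks automatable" and "share of jobs removable" can be made concrete with a toy calculation. The jobs, task counts, and automatable fractions below are invented purely for illustration:

```python
# Hypothetical illustration: task-level automation does not map
# one-to-one onto headcount. All job/task figures are invented.
jobs = {
    "analyst":   {"tasks": 10, "automatable": 2},  # 20% of this job's tasks
    "recruiter": {"tasks": 10, "automatable": 1},  # 10%
    "engineer":  {"tasks": 10, "automatable": 1},  # 10%
}

total_tasks = sum(j["tasks"] for j in jobs.values())
automatable = sum(j["automatable"] for j in jobs.values())
print(f"Share of work automatable: {automatable / total_tasks:.0%}")

# A whole job can only be eliminated if essentially all of its
# tasks are automatable -- which is true for none of these roles.
removable_jobs = [name for name, j in jobs.items()
                  if j["automatable"] == j["tasks"]]
print(f"Whole jobs removable: {len(removable_jobs)}")
```

In this sketch, roughly 13% of the task portfolio is automatable, yet zero whole jobs can be removed, because the automatable work is scattered across roles rather than concentrated in one.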
What AI truly changes is not employment in the abstract, but the mechanics of work.
1. How work gets done (task distribution)
AI rarely replaces entire jobs. Instead, it replaces or augments specific tasks.
Tasks best suited to automation tend to be high-volume, repetitive, low-risk, and low in contextual complexity. As novelty, risk, ambiguity, and consequence increase, the need for human oversight grows – which in turn means that AI usage creates entirely new jobs as well as changing existing ones. This distinction matters because different problems require different kinds of AI (there’s an entire article to be written on just this topic).
Where precision, repeatability, and safety are critical – for example, in regulated or safety-sensitive environments – deterministic systems, often more familiar forms of advanced technology like machine learning, are usually the most appropriate. Where exploration, pattern discovery, or creativity is needed, probabilistic systems such as LLMs can be useful.
Across both cases, humans remain central. They are required upstream to prioritize use cases, define success criteria, curate training data, and establish governance and validation frameworks. They are also required throughout execution: approving decisions, handling exceptions, interpreting outputs, and correcting errors.
Human–AI collaboration is not passive. It demands judgment, discernment, creativity, empathy, and dexterity. AI does not remove human effort; it reconfigures it.
2. The speed and scale of decisions
AI changes not just what decisions are made, but how fast and how broadly they propagate.
Errors scale in two distinct and compounding ways.
First, there is a speed problem. AI-enabled systems operate at a velocity that far exceeds human response times. If an automated pricing engine – untethered from approval processes – begins offering products below cost, financial losses can accumulate rapidly before governance mechanisms intervene.
Second, there is a scale problem. As organizations mature in their AI adoption, they deploy networks of agents handling dozens or hundreds of interdependent tasks. Even when individual agents perform well, small inaccuracies can compound across the system, producing outcomes that are significantly worse than expected.
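The compounding effect is easy to underestimate, so here is a minimal back-of-the-envelope sketch. The 99% per-step accuracy figure is an assumption for illustration, not a measurement of any real system:

```python
# Hypothetical illustration of error compounding across a chain of
# interdependent agents. Assumes each step succeeds independently
# with the same probability -- a simplification, not a real model.
per_step_accuracy = 0.99

for n_agents in (1, 10, 50, 100):
    end_to_end = per_step_accuracy ** n_agents
    print(f"{n_agents:>3} interdependent steps -> "
          f"{end_to_end:.0%} chance of a fully correct outcome")
```

Under these assumed numbers, a pipeline of 50 agents that are each "99% accurate" produces a fully correct end-to-end result only about 60% of the time, and at 100 steps the figure drops below 40%.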
This creates a new burden: humans must build systems to continuously monitor, interpret, and intervene in systems whose complexity and speed exceed intuitive understanding. Sometimes this means layering systems on top of systems – increasing complexity and the number of ways things can break.
3. The boundaries of roles
As AI becomes ubiquitous, familiarity and demonstrated skill with these tools become a form of baseline literacy.
Just as computer usage shifted from a specialized skill to a standard requirement, the ability to design, deploy, supervise, and work alongside AI systems will increasingly be expected across roles. An individual’s area of expertise may define what they do, but how they do it will increasingly involve effective interaction with AI.
This upends traditional role boundaries, where such activities were specialized in IT and CS functions, and quietly adds responsibility to many jobs without formally redefining them.
4. How feedback flows through the system
AI requires explicit feedback loops.
Work becomes more iterative, more data-driven, and more interdependent. Outputs must be evaluated, corrected, and refined, often by humans who did not generate the original input and may not fully understand the system producing the result. Especially when enormous LLMs are used, the complexity of such systems means it is impossible to fully understand how a specific result is generated.
This introduces a new form of work: continuous sense-making.
5. Where accountability must be redesigned
AI blurs the question of who did the work while sharpening the question of who is responsible.
Decision rights, escalation paths, and governance structures must evolve. Accountability cannot be outsourced to a model. Leaders must clearly define ownership for outcomes, especially when AI systems influence high-impact decisions. This is in part a corporate liability problem, but it is also much broader.
The deeper shift: adaptive labor
Taken together, these changes reveal something often overlooked.
AI does not reduce human effort; it reallocates it.
As AI is embedded into workflows, humans are required to examine model outputs, decide when to trust or override them, absorb new forms of risk, change behavior under uncertainty, and exercise judgment continuously rather than occasionally.
This creates a new, largely unacknowledged cost: adaptive labor.
AI introduces an “AI tax”: a cognitive and emotional load paid by humans to make AI useful, safe, and productive. The true bottleneck in AI transformation is not model capability, but human adaptive capacity.
Section 2: What AI doesn’t change
Like previous industrial revolutions, AI will fundamentally reshape large sectors of the economy and society. However, much about organizations will endure through this period of transformation.
At their core, organizations are the way humans come together to solve problems that couldn’t be solved individually. So, while it’s possible to have companies or products that are just an AI, or just a single person and their AI “minion”, by definition that’s not an organization. And, organizations – groups of people working together – are not at risk of becoming relics.
AI is excellent at many things, but at the simplest level, AI is just math. Enormous amounts of very complicated math based on inconceivable volumes of data, but math just the same. This is a superpower, but also a limitation.
At least right now, humans retain distinct advantages over AI in several critical domains. While AI can brute-force its way through iterations of important permutations, such as DNA sequences and protein folding, these systems generally lack genuine ingenuity and creativity. This limits their ability to make good judgments in novel situations.
To date, people are still required to coordinate functions and to motivate and align performance. And, as demos often hilariously demonstrate, AI systems struggle with dexterity – robots won’t be changing diapers on squirmy, slippery babies any time soon, as much as parents might wish to automate that task.
1. Judgment in ambiguous or novel situations
AI is a powerful pattern recognition tool, especially for tasks that repeat in fairly predictable ways. Yet, these systems are brittle with nuance.
While there is an enormous amount of fairly routine, predictable work that can be automated, organizations will often succeed or fail based on the choices they make when conditions are ambiguous or novel. AIs famously struggle when they encounter new puzzles, and maintaining a competitive advantage as an organization is in many ways an ongoing, novel puzzle.
2. Incentives and culture
It’s often been said that “culture eats strategy for breakfast”. The same will be true for culture’s impact on AI. Organizations awash in fear are unlikely to embrace AI and angry or frightened employees may even sabotage AI projects. Highly territorial organizations will struggle to build processes that cross boundaries in order to capitalize on the capabilities of AI – especially when doing so may disadvantage one leader or one team compared to another. Culture remains the lever that drives or derails adoption.
3. Accountability for outcomes
Leaders often talk about the importance of accountability. Automation blurs accountability. When an AI fails, who is responsible? The company that provided the tool or platform? The engineer who built the local application? The person who curated the training data? The program manager who uses the system? If a new system dramatically improves performance, who gets rewarded? If it fails catastrophically, who gets fired? Accountability – especially legal accountability – doesn’t shift to the machine. Ultimately, leaders and the overall organization assume legal liability for high-stakes decisions and must remain legally, ethically, and operationally accountable.
4. The need for clarity of purpose and values
As students of organizations, we find it fascinating how seemingly similar organizations – similar in resources and product portfolio – can have such dramatically different outcomes even in normal circumstances. While AI can be an important tool for coordination, an excellent thought partner for exploring options, and a helpful tool for personalized communication at scale, we still need humans to make strategic choices and align organizations. Humans set clarity of purpose and reinforce which behaviors are celebrated or punished.
5. Engagement with the physical world
Our current AI models are largely trapped in the world of data and digital transactions. And, while purpose-built robots and machines have been enormously capable at very specific tasks – most notably manufacturing – for decades now, their capabilities are limited to exactly what they were designed to do. An auto-assembly robot cannot fold your laundry (yet).
We often take for granted our human dexterity. We can crack eggs for omelets without a thought. Humanoid robots – designed to operate in a world designed for humans – remain clunky and awkward. For now, physical dexterity and touch remain the domain of humans.
Simply put, here’s what AI cannot do for your organization:
AI can rewrite workflows, but not establish your values.
AI can accelerate the ease of making decisions, but doesn’t define the goals.
AI will automate tasks, but it will not assume responsibility or accountability.
AI will change capability requirements of jobs, but not human judgment.
Section 3: The New Rules of Work
Let’s just say it: There will be winners and losers in this race.
Mechanization put a lot of farm workers out of jobs. Similarly, AI changes how knowledge work gets done, and work will increasingly be shared between humans and technology. The new rules of work, funnily enough, are what the old rules of work should have been all along.
Rules of work:
Before doing anything, get your data about work and workers in order (i.e., “work intelligence”)
Jobs will still exist but will be completed in new and different ways; as AI matures, the concept of a job will start to be deconstructed, and work itself may become the dominant unit of analysis rather than the “job”
Humans will do human things, and technology will do technological things – The more “digital” the task, the more automatable it will likely be
Collaboration is key – Intra-human collaboration and human + AI collaboration as well. Nothing happens without both.
What has to change within organizations to make this transformation possible?
AI turns traditional organizational power structures on their head. It really does. Senior leaders in organizations got to where they are by navigating organizational politics for ten, twenty, thirty, and sometimes even forty years. Do you think they are going to give up decision making to AI because some HBR article told them to? Doubtful.
The core insight for leaders navigating AI workforce transformation with confidence is this: HR functions must establish a foundation of capability orchestration – deliberately aligning human judgment, organizational design, and technical systems so AI can scale safely and productively. This is a fundamental reset for HR and how it delivers value. Most HR functions will probably need assistance on the transformation journey.
And we are here to help. You know where to find us.
I hope you like this article. If so, I have a few more articles coming out soon. Stay tuned. If you are interested in learning more directly from me, please connect with me on LinkedIn.
Cole’s recent articles
Skills Management and Capability Orchestration for Redesigning Work in the AI Era
The Global Wage Rate Is Coming: How We All Get Paid the Same
For access to all of Cole’s previous articles, go here.