Organisations are discovering that implementing AI and achieving productivity gains are different challenges entirely. Models deliver speed, lower unit costs, and new scale. Yet many find that throughput rises whilst resilience, judgement, and trust lag behind. This is the productivity paradox—and solving it requires rethinking work design from first principles.

Athalie Williams, a former CHRO and transformation executive with decades of experience leading complex organisational change, argues that the critical work isn’t technological. It’s architectural: deciding which human capabilities to preserve, which to redesign, and how to build organisations where people orchestrate intelligent systems rather than compete with them.

“Technology operates differently than humans and will continue to evolve rapidly,” Williams notes. “Models infer patterns and optimise for specific objectives. Empathy, moral judgement, narrative sensemaking, and ethical trade-offs are not extensions of machine capability, they are a different blueprint altogether.”

The Messy Middle Ground

Most organisations approach AI deployment with a binary mindset: either automate the task completely or leave it untouched. This creates a false choice that overlooks the most valuable territory—the space between full automation and traditional human execution.

Williams advocates for a three-category taxonomy: agent-operated work, hybrid work, and human-led work. The challenge lies in determining which activities belong in each category, and then redesigning roles, teams, and processes accordingly.

“Not all ‘human’ skills are equally scarce or equally valuable,” she explains. “For each decision type, leaders should answer two straightforward questions: which human skills must we preserve because machines can’t reliably replicate them, and which human activities should be redesigned so people supervise, guide, or orchestrate AI rather than doing the task end to end?”

This distinction matters because it determines where organisations invest in capability development, how they structure teams, and what they measure as performance.

Context Determines Design

The choice between automation and human involvement isn't ideological; it's contextual. Williams suggests a practical framework based on two variables: ambiguity and consequence.

Low-ambiguity, low-consequence tasks should be automated at scale: routine data entry, standard report generation, predictable workflow management. The returns are clear and the risks minimal.

High-ambiguity, high-consequence decisions require human leadership by design: strategic choices under uncertainty, ethical trade-offs with long-term implications, interventions during system failures where stakes are significant.

The interesting categories are the middle ground: low-ambiguity but high-consequence work might be automated with strict human checkpoints, whilst high-ambiguity but low-consequence tasks could operate as hybrids where humans verify or override system recommendations.
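The four quadrants above can be sketched as a simple decision rule. This is a minimal illustration of the ambiguity/consequence framework, not an implementation Williams prescribes; the two-level "low"/"high" ratings and category names follow the article's taxonomy, but real assessments would use richer rubrics agreed by governance owners.

```python
from enum import Enum

class WorkMode(Enum):
    AGENT_OPERATED = "agent-operated"   # automate at scale
    HYBRID = "hybrid"                   # human checkpoints or verify/override
    HUMAN_LED = "human-led"             # human leadership by design

def classify_task(ambiguity: str, consequence: str) -> WorkMode:
    """Map a task's ambiguity/consequence profile to a work-design category.

    Levels are illustrative ("low" / "high"); the mapping mirrors the
    framework described in the article.
    """
    if ambiguity == "low" and consequence == "low":
        return WorkMode.AGENT_OPERATED
    if ambiguity == "high" and consequence == "high":
        return WorkMode.HUMAN_LED
    # Mixed cases land in the middle ground: automation with strict
    # human checkpoints, or hybrids where humans verify or override.
    return WorkMode.HYBRID
```

The value of writing the rule down, even this crudely, is that it forces leaders to rate each decision type explicitly rather than defaulting to the automate-or-ignore binary.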

“Deciding which decisions must remain human-led, and who is accountable for them, is a strategic governance question,” Williams emphasises.

Real-World Role Redesign

What does this look like in practice? Williams points to customer service as an illuminating example. Routine queries can be handled by AI-generated responses, but cases involving vulnerable customers, ethical trade-offs, or complex failures require human judgement.

The role doesn’t disappear—it transforms. Customer service agents shift from responding to every inquiry to supervising AI-generated responses, intervening when nuance or discretion is required. Their expertise becomes more valuable, not less, because they’re deployed where human capability matters most.

Similarly, in credit underwriting, routine rule checks can be redesigned so underwriters orchestrate model outputs rather than manually processing every application. They focus on anomalies, edge cases, and situations where models lack sufficient data or where ethical considerations override algorithmic recommendations.

This redesign requires clarity about what constitutes an exception, who decides when human intervention is warranted, and how those decisions are documented and reviewed. Without that clarity, hybrid roles create confusion and slow adoption.
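That clarity about exceptions can itself be made explicit. The sketch below shows one hypothetical way an underwriting workflow might route applications to a human; the field names (`data_coverage`, `flags`) and thresholds are invented for illustration and are not from the article or any real system.

```python
def route_application(app: dict) -> str:
    """Decide whether a model decision stands or goes to an underwriter.

    Returns "model" for routine rule checks and "underwriter" for
    exceptions. Field names and the 0.8 coverage threshold are
    illustrative assumptions only.
    """
    if app.get("data_coverage", 1.0) < 0.8:
        return "underwriter"   # model lacks sufficient data
    if app.get("flags"):
        return "underwriter"   # ethical or edge-case flags override the model
    return "model"             # routine case: agent-operated
```

Codifying the routing rule answers two of the questions the paragraph raises: what constitutes an exception, and it leaves an auditable record of why each case went where it did.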

The Dynamic Boundary

Williams cautions against treating the line between “preserve” and “redesign” as fixed. Capabilities once thought uniquely human continue to shift as technology advances. Voice synthesis can now replicate emotional tone. Language models can draft communications that pass for human-authored. Image generation can create novel designs based on natural language descriptions.

“Treat the line between ‘preserve’ and ‘redesign’ as dynamic,” she advises. “Boards should expect this boundary to evolve and ensure governance frameworks shift with it.”

This creates ongoing work for organisations: regularly reviewing which capabilities remain distinctly human, which have become hybrid, and which can be fully automated. It’s not a one-time redesign but a continuous process of architectural evolution.

Building the Orchestration Capability

If more work becomes hybrid—humans orchestrating AI rather than executing tasks end to end—organisations need to build a new capability: the skill to interrogate model outputs, recognise when algorithms are operating outside their competence, and make informed judgements about when to override system recommendations.

This represents a different leadership muscle than traditional execution capability. It requires understanding how models work without becoming a technical expert, developing intuition about when outputs seem plausible but wrong, and having the confidence to intervene when something doesn’t align with organisational values or stakeholder needs.

“Help leaders build the muscles to interrogate model outputs, convene diverse perspectives, and lead ethical trade-off conversations,” Williams recommends. “This is a leadership skill, not a technical qualification.”

Organisations that invest in this capability create competitive advantage. Their people don’t simply accept what systems recommend—they add value by exercising judgement about when recommendations should be followed, adapted, or rejected.

The Human-Centred Design Imperative

Williams draws attention to research highlighting the importance of psychological ergonomics: designing systems that meet human needs for security, growth, and significance. These needs shape how people respond to change. Fear can stall AI adoption even when the technology is sound.

Recent examples demonstrate this principle in action. Organisations that co-create changes with employees, rather than imposing them, build ownership and clarity. They treat role redesign not as headcount reduction but as capability evolution, helping people understand how their expertise becomes more valuable in AI-enabled environments.

“The changes were co-created to build ownership and clarity,” Williams notes of one company’s approach. “It’s the kind of signal boards should look for when reviewing workforce plans: not just what’s changing, but how it’s being led.”

This approach acknowledges that humans aren’t infinitely adaptable. People need time to develop new skills, clarity about expectations, and support during transitions. Organisations that ignore these needs discover that technological readiness doesn’t translate to organisational adoption.

The Cost Conversation

Williams is direct about the implications: “Investing in humanness may reduce margins in the short term, but boards that treat it as messaging, rather than capability, will struggle to sustain it.”

Redesigning work requires investment in learning, process change, and potentially accepting slower efficiency gains during transitions. It means resisting the temptation to optimise for immediate cost reduction at the expense of sustainable performance.

The trade-off is real. Organisations that preserve and develop human capabilities where they matter most build resilience and trust. Those that automate indiscriminately discover that efficiency gains prove brittle when systems encounter unexpected situations or when stakeholder trust erodes.

“Human errors should be treated with the same rigour as model risk,” Williams notes, acknowledging that humanness has costs. The goal isn't perfection; it's designing systems where humans and AI contribute what each does best.

Making It Operational

For organisations serious about redesigning work, Williams recommends starting with role architecture: classify work into agent-operated, hybrid, and human-led categories, then align hiring, learning, and performance expectations to that taxonomy.

This classification forces specificity. It's not enough to say “we're using AI to enhance productivity.” Leaders must articulate which tasks are being automated, which are becoming hybrid, which remain human-led and, importantly, why.

Clear role architecture reduces ambiguity and accelerates alignment. Employees understand their relationship to AI tools. Managers know what to measure and how to evaluate performance. Organisations create consistent expectations across teams rather than leaving role design to individual interpretation.
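One lightweight way to make the role architecture concrete is a structured record per role that carries the category, the rationale, and the aligned checkpoints and metrics. This is a hypothetical sketch, not a tool the article describes; the example entry paraphrases the customer-service redesign discussed earlier.

```python
from dataclasses import dataclass, field

@dataclass
class RoleArchitecture:
    """One entry in a role-architecture registry (illustrative schema)."""
    role: str
    category: str                     # "agent-operated" | "hybrid" | "human-led"
    rationale: str                    # the "why" leaders must articulate
    human_checkpoints: list = field(default_factory=list)
    metrics: list = field(default_factory=list)

registry = [
    RoleArchitecture(
        role="Customer service agent",
        category="hybrid",
        rationale=("Routine queries handled by AI; vulnerable customers, "
                   "ethical trade-offs, and complex failures need human judgement"),
        human_checkpoints=["vulnerable customer flagged", "complex failure"],
        metrics=["override rate", "escalation quality"],
    ),
]
```

A registry like this gives managers something to measure against and keeps expectations consistent across teams, rather than leaving role design to individual interpretation.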

The Transformation That Matters

The productivity paradox exists because organisations treat AI as a technology challenge when it’s actually a work design challenge. Deploying models is straightforward. Redesigning roles, building orchestration capability, and creating the conditions where humans and AI complement each other is the hard work.

Williams’ framework provides a starting point: classify work deliberately, preserve human capabilities where they’re irreplaceable, redesign roles to orchestrate rather than execute, and invest in the leadership capability to make these distinctions well.

For organisations navigating intelligent automation, the question isn’t whether to adopt AI. It’s whether they’re willing to do the architectural work that makes adoption valuable: redesigning how work gets done, who does it, and what success looks like when humans and machines collaborate.

The organisations that solve this will discover that the productivity paradox isn’t inevitable. It’s what happens when technology advances faster than work design. Close that gap, and the returns from AI investment follow.
