Yet as AI moves from experimentation to everyday use, a familiar pattern is emerging. Lewis Silkin's recent Future @ Work 2026 report reveals that many organisations are investing rapidly in technology while underinvesting in the capabilities needed to deploy it responsibly. Meanwhile, Ius Laboris' Managing the Machine report on AI and regulation shows that others remain hesitant, waiting for regulatory clarity before taking decisive steps. These competing dynamics are creating a widening gap between ambition and readiness, and that gap has serious consequences.
Nowhere are those consequences clearer than in the world of work. Employees increasingly face decisions shaped by AI systems, yet they are caught between accelerated deployment and delayed governance, with limited visibility into how those decisions are made or how they can be challenged. The transition from innovation to accountability is no longer approaching: it is already underway.
The people and governance gap
The Future @ Work 2026 report reveals a stark imbalance: 74% of employers continue to invest heavily in AI technology while underinvesting in workforce capability. Furthermore, while the importance of human-centred skills such as critical thinking, ethical judgement, creativity, and cross-functional collaboration is widely acknowledged, far less attention is paid to building the organisational capacity required to govern AI in practice.
This is not simply a skills issue but a genuine governance challenge. Effective oversight depends on people who understand how AI systems function, where their limitations lie, and how risk can manifest in real-world contexts. It therefore requires managers who can interrogate algorithmic recommendations rather than defer to them, HR teams that can explain how AI-assisted decisions are made, and leaders who can identify when those processes fail.
Without this capability, governance frameworks remain largely theoretical. Policies may exist on paper but struggle to shape behaviour in practice. Similarly, risks may be formally acknowledged but poorly understood and inadequately addressed. And when regulators, tribunals, or employees ask questions about how decisions were reached, organisations risk finding themselves unable to provide credible answers.
In this sense, AI is acting as a stress test of organisational maturity: where capability is thin, the gap between stated readiness and actual control quickly becomes apparent.
The regulation mirage
Ius Laboris’ Managing the Machine report details how, faced with regulatory uncertainty, some employers have chosen to wait. With frameworks still evolving, the instinct to pause investment in governance until the rules are settled is understandable.
However, this approach misreads both the regulatory landscape and the nature of compliance. While the EU AI Act is now in force and other jurisdictions are developing their own approaches, comprehensive regulation remains uneven across markets. More fundamentally, legislation alone does not create good governance.
Managing the Machine draws on examples from multiple jurisdictions to show that rules are only as effective as the institutional and organisational capacity supporting them. Where enforcement is limited or internal capability is weak, even well-designed laws struggle to deliver meaningful outcomes. Regulation can therefore set expectations, but it cannot substitute for internal systems, leadership judgement, and workforce understanding.
For employers, especially those operating across borders, the implication is clear: waiting for regulatory certainty is unlikely to reduce risk. The organisations best positioned to navigate this transition are those building their own governance foundations now, grounded in principles that can flex across jurisdictions rather than treating compliance as a final step.
What employers should prioritise
Despite regulatory variation, the core challenges employers face remain remarkably consistent. Across regions, the same questions recur: how do we ensure transparency? How do we explain AI-assisted decisions? How do we identify and mitigate bias? And how do we maintain meaningful human oversight?
This consistency creates an opportunity. Rather than developing fragmented responses for each jurisdiction, employers can build a common governance baseline that meets high regulatory expectations, while remaining adaptable to local requirements.
In practice, this means focusing on four areas.
First, clear AI policies and acceptable-use frameworks. Employees need practical guidance on which tools they can use, for what purposes, and with what safeguards. This is especially important as generative AI tools become embedded in everyday work, often beyond the visibility of legal or IT teams.
Second, sustained investment in capability building. Governance depends on people, not documents. AI literacy for HR professionals, managers, procurement teams, and employees is foundational, not optional.
Third, robust vendor and procurement processes. Most workplace AI systems are purchased rather than developed in-house. Employers need to understand how tools operate, what data they rely on, and what contractual protections are required to support transparency and accountability over time.
Finally, and perhaps most importantly, meaningful human oversight mechanisms. Regulators and tribunals increasingly expect evidence that humans remain genuinely in control of consequential decisions. This requires going beyond formal review steps alone, building the capability and confidence to question, challenge, and override algorithmic outputs where appropriate.
From readiness to accountability
As the regulatory landscape continues to shift and the environment in which organisations operate becomes less predictable, the window for thoughtful preparation is narrowing. Organisations that treat AI governance as a compliance exercise, or that defer action until regulation forces their hand, risk being exposed as AI use becomes more visible and more consequential.
Those who invest now in people, capability, and governance structures will be better positioned to manage risk, unlock value, and maintain trust. AI in the workplace is no longer experimental. The question for employers is whether their governance has evolved quickly enough to match its impact.
Download our latest report - Future @ Work 2026: Building for future readiness
