AI agents fixing prices, colluding with each other, stealing credentials from other users, fabricating e-mails, and even hiding messages within ordinary text without users knowing.
These might sound like dystopian fiction, but they are findings that sit half-buried in a foresight paper published by the Digital Regulation Cooperation Forum (DRCF) on 31 March 2026, The Future of Agentic AI. They are real behaviours observed in some frontier models that are already in commercial use.
For now, these behaviours have been observed only under simulated conditions, but the DRCF is looking proactively at future developments and at how regulation can stop situations like these occurring in the real world.
The DRCF's mantra? "Safe and responsible" adoption by businesses.
If you have adopted agentic AI or are looking to do so, the paper is an important read to head off compliance headaches before they arise.
Below, we set out our thoughts on the paper and what businesses need to be thinking about.
Status of the paper: quiet warnings, clear direction
Although the paper is framed modestly – it wants to "foster debate and discussion" – readers shouldn't be fooled by the diplomatic packaging. It represents a clear indication of where the CMA, FCA, ICO and Ofcom see the risks and opportunities in agentic AI and, by implication, where businesses should be focussing their compliance efforts.
While the paper "should not be taken as an indication of current or future policy by any of the member regulators", our view is that these quiet warnings still carry some weight. The destination may not yet be fixed, but the direction of travel is becoming increasingly clear.
The case against boilerplate prohibitions
Firstly, we hope that the paper gives businesses a sharper sense of what agentic AI means in practice (and in contracts), and what the risks and opportunities are.
Out of an abundance of caution, or a desire to manage risk, or both, we've noticed that many large corporate buyers now impose blanket bans on their suppliers' use of agentic AI. The clause has spread so widely it's now almost boilerplate.
The trouble is that such bans treat all agentic AI as a single, undifferentiated threat. Consider a supplier that deploys an AI agent entirely within its own systems to manage scheduling, invoicing or stock replenishment. Such a tool never touches the buyer's data, yet a blanket contractual prohibition makes no allowance for that, barring tools that pose no conceivable risk to the buyer's information.
That bluntness carries a cost. The paper cites a large-scale study of a generative AI assistant in customer support, which found a productivity gain of roughly 15%, measured by the number of issues resolved per hour.
The upshot? Contracts that prohibit all agentic AI, without distinction, will slow adoption of tools capable of delivering gains.
Key themes
Although the paper doesn't consider such contractual issues, it does highlight many other key themes, including the need to:
- Build governance in from the start, not as an afterthought. AI agents need to produce auditable records. Human involvement in decisions carrying legal or significant consequences must be real, not just a case of rubber-stamping.
- Define when machines need to defer to people. Clear thresholds matter. This is especially pressing where outcomes could alter someone's legal position.
- Ration data. Agents, like employees, should operate on a need-to-know basis to comply with the UK GDPR's data minimisation and data-protection-by-design principles. Excessive permissions are a gift to any adversary looking to escalate privileges.
- Say what the agent does. Businesses deploying agentic AI should tell consumers plainly that they're doing so.
- Make the reasoning visible. Observability gives businesses something to point to when regulators come asking. (The ICO has published guidance on AI explainability. See here.)
- If dealing with consumers, give them genuine control. People should understand what they've consented to or delegated. Under the Consumer Rights Act 2015, agentic AI providers must still meet minimum standards of quality, performance, and fitness for purpose.
- Harden the security perimeter. Respondents to the call for input recommended layered security: rigorous testing, threat analysis, red teaming (simulated attacks), and combining traditional cybersecurity controls with tools built for autonomous systems. Organisations should also scrutinise what level of data and system access they hand to agentic tools, and watch for signs of malicious agent activity.
- Embrace open standards. Supporting interoperable protocols such as the Model Context Protocol (MCP) and Agent2Agent (A2A) reduces the risk of vendor lock-in and keeps markets competitive.
The paper also sets out key things to avoid when implementing agentic AI:
- Don't ignore collusion risks in pricing. As mentioned above, research has shown that LLM-based agents can spontaneously converge on supra-competitive prices in simulated markets without anyone telling them to. Businesses deploying agents in pricing roles need monitoring tools adequate to the risk and should assume regulators will eventually ask pointed questions.
- Don't let agents become black boxes. An agent whose reasoning can't be explained or challenged is a liability under consumer, contract, and data protection law.
- Don't cut corners on consumer law. Overstating a system's capabilities may breach consumer protection rules. Terms of service must be fair, not buried. Cancellation terms, subscription renewals, and other material conditions should be surfaced clearly.
- Don't leave people behind. Agentic AI risks widening the digital divide if accessibility is treated as an afterthought. Businesses should build in accessible design and personalised support from the outset, not as a patch applied once complaints arrive.
The paper's central message is straightforward: businesses that weave fairness, transparency, accountability and data protection into the fabric of their agentic AI – at the design stage, not after launch – will earn consumer trust, scale without regulatory collision, and compete on the quality of what they actually deliver.
It's also worth bearing in mind that a single agentic AI deployment could simultaneously trigger concerns under data protection law (ICO), financial regulation (FCA), online safety duties (Ofcom), and competition and consumer law (CMA). The DRCF paper itself makes this point: a retail assistant powered by agentic AI, for example, could activate cross-regulatory concerns across all four regulators at once.
Don't forget about existing guidance
Bear in mind that the paper is by no means the final word on agentic AI. As we reported recently in this article, the CMA has already published guidance, on 9 March 2026, setting out practical steps for businesses that deploy agentic AI.
The CMA also published research to accompany the guidance which reiterates that the greater autonomy in AI agents "also brings greater responsibility" and that "trust is critical infrastructure for adoption, investment and growth."
And as we note in our article, it's crucial that businesses take note of this guidance, because the CMA might well use its new enforcement powers, which carry fines of up to 10% of annual global turnover.
Strong on risks, light on remedies
Whilst the paper is a thorough piece of horizon scanning, it's not without its weaknesses.
It vividly catalogues novel and serious risks, yet its proposed next steps amount to little more than further research and continued dialogue.
There's also an unresolved tension at its core: it says existing regulatory frameworks remain adequate, whilst describing harms that appear qualitatively different from anything those frameworks were designed to address.
With legislation trailing behind the technology, the task of managing agentic AI risk falls largely to the contracts that govern its procurement and use. Most of those agreements still promise, in one form or another, that the system will behave as specified. Frontier models are making that promise increasingly difficult to keep. Commercial drafting is now scrambling to articulate a distinction it never previously needed to draw: the distinction between what the AI system was told to do and what it chose to do. Contracts must regulate that gap between expected and emergent behaviour, and confront the uncomfortable question of liability when an agent acts on its own judgment, producing consequences no one predicted.
Although an AI Bill dealing with some of the most pressing concerns in this area isn't likely to be in the next King's Speech, UK legislators will need to keep a keen eye on developments in agentic AI. The work of the DRCF may not be enough.
If you need advice about using AI in your operations, please contact a member of the team.
