The Competition and Markets Authority (CMA) has published guidance and research on how businesses can use agentic AI whilst complying with consumer protection law. Businesses should take note: the CMA now has direct enforcement powers and can impose fines of up to 10% of annual global turnover for breaches of consumer law.

What is Agentic AI?

Agentic AI tools can achieve goals autonomously, planning, coordinating and taking actions across multiple services. Unlike traditional AI tools which merely assist decisions, AI agents sense their environment, decide and act. This goes beyond generating responses to queries. An AI agent may assess goals, break them into subtasks, plan workflows, retrieve real-time data, execute actions autonomously (such as making payments), and store memory of past interactions to improve over time. 

Businesses are already deploying AI agents, for example in customer operations and service, commerce and sales workflows, software operations and internal business process automation.

The CMA emphasises that consumer law requires businesses to treat customers fairly, and it does not matter whether customers interact with a person or an AI agent. A business is responsible for what an AI agent does in the same way as it would be for what an employee does, including where a third party has designed or provides the AI agent.

The CMA's guidance sets out four practical steps for businesses deploying agentic AI.

Tell customers if you use an AI agent

Businesses should be transparent about how they use AI agents as a means of building trust, particularly where the use of AI might surprise customers. Under consumer law, consumers must have the information they need to make informed decisions and must not be misled. If the fact that customers are dealing with AI rather than a person might affect their decisions, businesses should tell them. Businesses must also not overstate the role of AI or misrepresent what the system can and cannot do.

Train AI agents to comply with consumer law

Businesses should consider what the AI agent will be set up to do and how that might affect customers. AI agents should be prompted to respect customers' statutory rights and contractual terms, avoid misleading customers, and properly obtain any necessary consents. Testing is a crucial part of this set-up, including evaluating the agent's performance before deployment.

Monitor how AI agents are performing

Businesses should regularly check that AI agents are delivering correct results, behaving as intended, and complying with consumer law. Regular human oversight is essential to catch mistakes and ensure legal compliance. 

Refine the AI agent quickly if there is a problem

If an AI agent is not performing as expected and this is leading to non-compliant outcomes, businesses must act quickly to address the problem, for example by refining prompts or workflows. This is particularly important where AI agents interact with large numbers of people or with vulnerable customers. 

Examples of use cases

The CMA provides specific guidance across several common use cases.

  • For marketing campaigns, it says that AI agents must provide accurate price information including all unavoidable charges, properly label paid endorsements, and ensure offers or price reductions are genuine. Someone with appropriate experience should regularly review AI-generated marketing materials.
  • For processing refund requests, AI agents must be designed with consumers' statutory rights under the Consumer Rights Act 2015 and the Consumer Contracts (Information, Cancellation and Additional Charges) Regulations 2013 in mind, as well as any contractual terms such as extended returns periods. Exchanges with consumers should be regularly reviewed to ensure decisions properly reflect the nature of refund requests. 
  • For customer service queries, AI agents must respond accurately to queries about prices, products and rights, provide consumers with all information needed for informed decisions, and not make it difficult for consumers to exercise their rights. Complaints and customer feedback should be regularly reviewed. 
  • For comparison services, results must be accurate and important information must be clearly disclosed, including market coverage, data searched, any limitations, how results are ranked, and any links with suppliers. 

Key risks identified in the research

The CMA's research highlights several significant risks that businesses need to be aware of. These include the potential for AI agents to deploy "dark patterns" (design strategies which manipulate consumers into taking decisions on the platform), concerns about their reliability, and the risk of biased or discriminatory outcomes. The CMA also stresses that such systems should have well-defined boundaries, clear prompts, and effective override mechanisms. In addition, they should be built to support interoperability and data mobility to avoid locking consumers into particular services.

Separately, the CMA has further warned about the risk of agentic collusion, where multiple businesses use autonomous systems that independently optimise pricing or commercial decisions. Interactions between these systems could unintentionally reduce competitive pressure.

Under the UK's consumer protection regime, businesses are prohibited from misleading, manipulating, or placing undue pressure on consumers, regardless of whether the outcome is caused by human actions, algorithmic processes, or the design of digital interfaces. The CMA has made clear that if an AI agent steers, pushes, or misleads consumers in a way that harms their economic interests, this is likely to breach the law. The principles the CMA published for foundation models in 2023 remain highly relevant to agentic AI, particularly those concerning transparency and accountability.

If you need advice about using AI in your customer-facing operations, please contact a member of the team. 
 

Agentic AI and consumer law: the CMA's guidance for businesses
