Under the Act, companies in scope must put in place systems and processes to improve user safety. Ofcom has been appointed to enforce the Act.
The focus of the Act is not on Ofcom moderating individual pieces of content, but on tech companies proactively assessing risks of harm to their users and putting in place systems and processes to keep them safer online.
The Act extends to the whole of the UK, although some of the criminal offences it creates apply differently in the different home nations.
Inside
- What companies are in scope?
- Does the Act affect my business?
- What does the legislation require?
- Criminal offences
- What happens if companies do not comply?
- What has Ofcom done so far?
- What are Ofcom’s priorities likely to be?
What companies are in scope?
The Act imposes legal requirements on:
- providers of user-to-user internet services which allow users to encounter content (such as messages, images, videos, audio and comments) generated, uploaded or shared by other users;
- providers of search engines which enable users to search multiple websites and databases (a generative AI element, such as a large language model, could be considered to be part of a search service where it is integrated into the search engine); and
- providers of internet services which publish or display pornographic content (meaning pornographic content published or displayed by the provider itself, rather than by its users).
The Act applies to services with links to the UK even if the companies providing them are based outside the UK. A service has links to the UK if it has a significant number of UK users, if the UK is a target market, or if it is capable of being accessed by UK users and there is a material risk of significant harm to such users.
Larger companies coming within scope are categorised as “Category 1”, “Category 2A” or “Category 2B” services. Category 1 services include the largest platforms with the most users and are subject to additional obligations. Thresholds for categorisation are set out in the Online Safety Act 2023 (Category 1, Category 2A and Category 2B Threshold Conditions) Regulations 2025 (SI 2025/226).
As well as the formal categorisation, providers’ safety duties are proportionate to factors including the risk of harm to individuals, and the size and capacity of each provider.
There are, however, no exemptions for companies which fall within scope.
Does the Act affect my business?
Its scope goes well beyond the obvious ‘Big Tech’ social media platforms and search engines: it is likely to encompass thousands of smaller services, including messaging services, websites, platforms and online forums where information can be shared, where advertising is served, or where users might interact with other users.
If you operate a video sharing platform, the temporary Video Sharing Platform (“VSP”) regime is expected to be replaced by the Act in July 2025, when the children’s duties come into force (see below).
What does the legislation require?
Platforms are required to:
- remove illegal content quickly or prevent it from appearing in the first place. This includes content relating to offences designated as priority offences in the Act, such as terrorism, public order offences or sexual exploitation;
- prevent children from accessing harmful and age-inappropriate content (such as pornographic content, online abuse, cyberbullying or online harassment, or content which promotes or glorifies suicide, self-harm or eating disorders);
- enforce age limits and implement age-checking measures;
- ensure the risks and dangers posed to children on the largest social media platforms are more transparent, including by publishing risk assessments; and
- provide users with clear and accessible ways to report problems online when they do arise.
Harm is defined as physical or psychological harm, and the Act is aimed at harm to individuals rather than to wider society.
Larger platforms must remove all illegal content, remove content that is banned by their own terms and conditions, and empower adult internet users with tools so that they can tailor the type of content they see and can avoid potentially harmful content if they do not want to see it on their feeds. Children must be automatically prevented from seeing this content without having to change any settings.
Category 1 services will also be required to prevent paid-for fraudulent adverts appearing on their services. They will also have a duty to protect journalistic content, news publisher content and content of democratic importance. These aspects are not yet in force as Ofcom has not yet published guidance on them.
Criminal offences
The criminal offences introduced by the Act came into effect on 31 January 2024. These offences cover:
- encouraging or assisting serious self-harm
- cyberflashing
- sending false information intended to cause non-trivial harm
- threatening communications
- intimate image abuse
- epilepsy trolling
These new offences apply directly to the individuals who send the offending communications, and convictions have already been secured under the cyberflashing and threatening communications offences.
Criminal action can be taken against senior managers who fail to ensure that their companies comply with information requests from Ofcom. Ofcom can also hold companies and senior managers (where they are at fault) criminally liable if the provider fails to comply with Ofcom’s enforcement notices in relation to specific child safety duties or to child sexual abuse and exploitation on their service.
What happens if companies do not comply?
Ofcom can:
- require companies not meeting their obligations to put things right, impose fines of up to £18 million or 10% of global annual turnover (whichever is higher), or apply to court for business disruption measures (including blocking non-compliant services);
- use its range of information-gathering powers to support its oversight and enforcement activity; and
- make companies change their behaviour by requiring measures to improve compliance, including the use of proactive technologies to identify illegal content and ensure children are not encountering harmful material.
Ofcom is also publishing codes of practice, setting out the steps companies should take to comply with their new duties. Companies will either need to follow these steps or show that their approach is equally effective (the government says that it expects Ofcom to work collaboratively with companies to help them understand their new obligations and what steps they need to take to protect their users from harm).
Senior managers may be held criminally liable for a company's failure to comply with the legislation in specific circumstances. These include offences for failing to comply with information and audit notices, offences committed under the Act by the body corporate but with the consent, connivance or neglect of a company officer, and, in certain circumstances, offences for failing to comply with a children’s online safety duty.
What has Ofcom done so far?
Ofcom has taken the following steps in relation to the Act.
Phase one: illegal harms
Ofcom published its illegal harms codes and guidance in December 2024 and online platforms had to complete risk assessments and put in place measures to combat risks by 16 March 2025. The illegal harms safety duties became enforceable from the same date. Ofcom has also published its enforcement guidance and record keeping and review guidance.
Phase two: child safety, pornography, and protection of women and girls
Ofcom published children’s access assessment guidance on 16 January 2025 and online platforms had to complete their children’s access assessments by 16 April 2025. Ofcom published children’s safety codes and guidance in April 2025 and online platforms must complete children’s risk assessments and implement measures to combat risks by 24 July 2025. The child protection safety duties will therefore become enforceable on 25 July 2025.
On 16 January 2025, Ofcom published its guidance for pornography providers on age assurance.
At the time of writing, Ofcom was consulting on best practice guidance on protecting women and girls online; the consultation closed on 23 May.
Phase three: duties on categorised services
Ofcom expects to publish the register of categorised services and issue draft transparency notices in Summer 2025. It will issue final transparency notices soon after. In early 2026, Ofcom expects to publish draft proposals regarding the additional duties on these services.
Ofcom is planning an additional consultation on how automated detection tools, including AI, can be used to mitigate the risk of illegal harms and content most harmful to children, including previously undetected child sexual abuse material and content encouraging suicide and self-harm.
The Act also requires Ofcom to establish an advisory committee on disinformation and misinformation to build cross-sector understanding of these issues. The committee held its first meeting in April 2025.
Enforcement so far
In May and June 2025, Ofcom launched investigations relating to illegal harms and age assurance.
In April 2025, Ofcom launched its first illegal harms investigation under the Act, into an online suicide discussion forum. It will assess whether the provider has complied with its duties under the Act to: adequately respond to a statutory information request; complete and keep a record of a suitable and sufficient illegal content risk assessment; and comply with the safety duties about illegal content, the duties relating to content reporting and the duties about complaints procedures, which apply in relation to regulated user-to-user services.
In March 2025, it issued its first information requests concerning illegal harms assessments, following publication of its guidance on its information-gathering powers.
In January 2025, Ofcom also opened an enforcement programme into the age assurance measures that providers of pornographic content are implementing, having issued its guidance on age assurance.
What are Ofcom’s priorities likely to be?
In May 2025, the UK government laid its statement of strategic priorities on online safety before Parliament. Ofcom must have regard to the statement when exercising its regulatory functions on online safety matters. The priorities are:
- Safety by design: Embed safety by design to deliver safe online experiences for all users but especially children, tackle violence against women and girls, and work towards ensuring that there are no safe havens for illegal content and activity, including fraud, child sexual exploitation and abuse, and illegal disinformation.
- Transparency and accountability: Ensure industry transparency and accountability for delivering on online safety outcomes, driving increased trust in services and expanding the evidence-base to provide safer experiences for users.
- Agile regulation: Deliver an agile approach to regulation, ensuring the framework is robust in monitoring and tackling emerging harms – such as AI-generated content – and increases friction for technologies which enable online harm.
- Inclusivity and resilience: Create an inclusive, informed and vibrant digital society resilient to potential harms, including disinformation.
- Technology and innovation: Foster the innovation of online safety technologies to improve the safety of users and drive growth.
