There has been a whirlwind of online safety developments in recent weeks. 

In early 2025, the UK government announced that it would introduce new laws to protect against the creation of sexually explicit AI content in what was then the Data (Use and Access) Bill. The government also proposed provisions in the Crime and Policing Bill to ban the creation of intimate images of adults; that Bill continues to pass through the parliamentary process. The proposed rules place criminal responsibility on individuals, but they were not originally intended to make the AI models that create this material illegal.

As of February 6, 2026, creating or requesting the creation of intimate images of an adult without their consent is a criminal offence in the UK. This significant legal development is the result of new provisions in the Data (Use and Access) Act 2025 coming into force.

The changes target the creation of so-called "deepfake" intimate images without consent or a reasonable belief in consent.

In addition, the time limit for prosecutions has been extended. A case can now be brought within six months of the prosecutor having sufficient evidence, and up to three years after the offence was committed, giving victims more time to seek justice.

These measures close a significant loophole. Although English law already criminalised the sharing of non-consensual intimate images, it did not cover their initial creation, whether through AI or other means. This gap has now been filled.

This new law aligns with a broader regulatory concern about, and crackdown on, the misuse of AI tools. It follows the launch of investigations by Ofcom under the Online Safety Act 2023 (OSA) and by the ICO, and a similar probe launched by the European Commission under the Digital Services Act, signalling a growing international consensus on combating this form of digital abuse. Ofcom has clarified that images and videos created by a chatbot without it searching the internet are not generally in scope of the OSA. They will only be in scope if they are pornographic, in which case they need to be age-gated, or if they can be shared with other users of the chatbot.

In May last year, the Children's Commissioner for England issued a report calling for AI tools that allow nude images of children and teenagers to be created to be banned. Following concerns expressed by two Select Committees, the UK government announced that it would ban such apps. 

On 16 February 2026, following increasing pressure regarding online safety and the misuse of AI tools, the UK government announced that it will move quickly to close the identified legal loophole by bringing AI chatbot providers within scope of the illegal content duties under the OSA. This announcement follows the Ofcom investigation mentioned above, which concerns an AI chatbot used to create sexualised imagery and child sexual abuse material.

Currently, the OSA regulates "user-to-user services" (such as social media platforms) and search services, but standalone AI chatbots do not fall neatly within this definition and are not currently in scope of the Act. Ofcom was able to open its investigation into the platform because the AI tool is integrated into that platform. However, if an AI chatbot operates independently of a regulated social media service, it may fall outside Ofcom's enforcement powers entirely. 

To address this gap, the government is tabling an amendment to the Crime and Policing Bill to require AI chatbot providers not currently in scope of the OSA to protect their users from illegal content, or face the consequences of breaking the law. The government also intends to table an amendment to the Crime and Policing Bill to give effect to measures on the preservation of children's social media data.

Additionally, new powers in the Children's Wellbeing and Schools Bill (which is also currently passing through Parliament) will enable the government to act at speed to introduce targeted measures following the children's digital wellbeing consultation.

The children's digital wellbeing consultation was announced by the UK government on 19 January 2026, with the aim of examining the most effective ways to ensure children have "healthy online experiences" including the consideration of a social media minimum access age. More detail is expected to follow in March, with the consultation response due in summer 2026.

The consultation is seeking views on measures including: a minimum age for children to access social media (Australia has already introduced a ban for under-16s, and similar proposals have been announced across the EU); improving age verification and enforcing age limits; raising the current digital age of consent from 13; removing or limiting design features that drive addictive or compulsive use (such as 'infinite scrolling' and 'streaks'); and giving more support to parents, such as further guidance or parental controls.

Ofcom's call for evidence

As well as this, Ofcom has called for evidence to inform its first statutory report on content that is harmful to children. As part of its duties under the Online Safety Act, Ofcom must produce a report on content that is harmful to children at least every three years.

The purpose of this report is to understand UK children's online experiences, including how often they encounter harmful content and the severity of any harm caused as a result. The report must also include advice to the Secretary of State about whether Ofcom recommends any changes to the kinds of content harmful to children that are currently specified in the Act. 

Specifically, Ofcom is seeking evidence on the following:

  • the incidence of content that is harmful to children on regulated services;
  • the severity of harm that children in the UK suffer, or may suffer, as a result; and
  • evidence that suggests that it may be appropriate to make changes to specific kinds of harmful content (referred to as primary priority content and priority content) covered in the Act.

The call for evidence closes on 10 March 2026. Ofcom will publish its final report and advice to the Secretary of State by 26 October 2026.

There's a lot going on, and it's important that businesses that may fall within scope of the OSA have completed the necessary risk assessments and are following Ofcom's guidance. If you need help, please contact the team.

Online safety reforms to be fast-tracked amid rising AI risks
