British Tech Firms and Child Protection Officials to Test AI's Ability to Create Abuse Content

Technology companies and child protection organizations will be granted authority to assess whether artificial intelligence tools can generate child exploitation images under recently introduced British laws.

Significant Increase in AI-Generated Harmful Content

The announcement coincided with findings from a protection monitoring body showing that cases of AI-generated child sexual abuse material have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.

Updated Legal Framework

Under the changes, authorities will permit approved AI developers and child safety groups to examine AI models – the foundational technology behind conversational AI and image generators – and verify that they have sufficient safeguards to prevent the creation of child sexual abuse images.

"Fundamentally about preventing exploitation before it happens," stated the minister for AI and online safety, noting: "Specialists, under rigorous protocols, can now identify the risk in AI systems promptly."

Tackling Legal Challenges

The changes have been introduced because it is against the law to create or possess child sexual abuse material (CSAM), meaning that AI developers and others cannot generate such content even as part of a testing process. Until now, officials had to wait until AI-generated CSAM appeared online before taking action against it.

The law is aimed at preventing that problem by enabling approved testers to halt the production of such material at source.

Legislative Vehicle

The government is introducing the changes as amendments to criminal justice legislation, which also establishes a prohibition on possessing, producing or distributing AI systems designed to generate child sexual abuse material.

Practical Impact

This week, the minister toured the London headquarters of a children's helpline and heard a simulated call to advisors featuring an account of AI-based exploitation. The call portrayed a teenager seeking help after being blackmailed with a sexualised deepfake of themselves created using AI.

"When I hear about young people experiencing extortion online, it is a cause of extreme frustration in me and rightful concern amongst parents," he said.

Concerning Statistics

A leading online safety organization stated that cases of AI-generated abuse material – such as online pages that may include numerous images – had more than doubled so far this year.

Instances of category A content – the gravest form of exploitation – increased from 2,621 images or videos to 3,086.

  • Female children were predominantly victimized, accounting for 94% of illegal AI images in 2025
  • Portrayals of newborns to toddlers rose from five in 2024 to 92 in 2025

Industry Reaction

The legislative amendment could "constitute a crucial step to ensure AI products are safe before they are launched," commented the head of the online safety organization.

"AI tools have made it so survivors can be targeted repeatedly with just a simple actions, giving criminals the capability to create potentially limitless amounts of advanced, photorealistic child sexual abuse material," she added. "Content which further exploits victims' trauma, and makes young people, particularly female children, more vulnerable both online and offline."

Support Interaction Data

The children's helpline also released data on counselling sessions in which AI was mentioned. AI-related risks raised in the sessions include:

  • Using AI to evaluate weight, physique and looks
  • AI assistants dissuading children from talking to trusted guardians about harm
  • Facing harassment online with AI-generated material
  • Online blackmail using AI-faked images

Between April and September this year, Childline conducted 367 support interactions in which AI, chatbots and associated terms were mentioned, significantly more than in the equivalent timeframe last year.

Fifty percent of the references to AI in the 2025 sessions related to psychological wellbeing, including using AI assistants for support and AI therapy apps.

Benjamin Floyd