UK Tech Companies and Child Protection Agencies to Examine AI's Ability to Create Abuse Images

Technology companies and child safety organizations will receive permission to evaluate whether artificial intelligence systems can generate child exploitation material under recently introduced British laws.

Substantial Increase in AI-Generated Illegal Content

The announcement came alongside findings from a safety monitoring body showing that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.

Updated Regulatory Framework

Under the changes, the government will permit approved AI developers and child protection groups to inspect AI models – the underlying systems that power conversational AI and image-generation tools – and ensure they have adequate safeguards to prevent them from creating depictions of child exploitation.

The measure is "fundamentally about stopping exploitation before it occurs," stated the minister for AI and online safety, who added: "Experts, under strict protocols, can now detect the danger in AI systems promptly."

Addressing Legal Challenges

The changes have been introduced because it is against the law to produce or possess CSAM, meaning that AI developers and others could not generate such content even as part of an evaluation regime. Previously, authorities had to wait until AI-generated CSAM was uploaded online before dealing with it.

This law is designed to avert that problem by helping to halt the production of those images at source.

Legislative Framework

The changes are being introduced by the authorities as modifications to the crime and policing bill, which is also implementing a ban on possessing, creating or distributing AI systems designed to generate child sexual abuse material.

Real-World Consequences

This week, the minister toured the London headquarters of Childline and heard a mock-up call to advisers involving a report of AI-based exploitation. The call portrayed a teenager seeking help after being blackmailed with an explicit deepfake of himself, created using AI.

"When I hear about young people facing blackmail online, it is a cause of intense anger in me and justified concern amongst families," he said.

Concerning Data

A prominent online safety organization reported that instances of AI-generated exploitation content – such as online pages that may include multiple files – had significantly increased so far this year.

Cases of category A material – the most serious form of exploitation – rose from 2,621 visual files to 3,086.

  • Female children were overwhelmingly victimized, accounting for 94% of prohibited AI images in 2025
  • Depictions of infants and toddlers increased from five in 2024 to 92 in 2025

Sector Response

The law change could "constitute a vital step to ensure AI tools are safe before they are released," stated the chief executive of the online safety foundation.

"AI tools have made it so survivors can be targeted all over again with just a few simple actions, giving criminals the ability to create potentially limitless amounts of advanced, photorealistic child sexual abuse material," she added. "Material which further commodifies survivors' trauma, and renders children, especially girls, more vulnerable both online and offline."

Counseling Session Data

The children's helpline also published details of support sessions in which AI was referenced. The AI-related harms mentioned in those sessions include:

  • Using AI to rate weight, physique and looks
  • AI assistants discouraging children from talking to safe adults about abuse
  • Being bullied online with AI-generated content
  • Digital blackmail using AI-manipulated images

Between April and September this year, the helpline conducted 367 support interactions in which AI, chatbots and related topics were mentioned, significantly more than in the equivalent timeframe last year.

Half of the mentions of AI in the 2025 interactions were connected with mental health and wellbeing, including using chatbots for assistance and AI therapeutic apps.

Dana Carson