UK Tech Companies and Child Protection Agencies to Test AI's Ability to Create Abuse Images

Tech firms and child protection agencies will receive permission to assess whether artificial intelligence tools can generate child abuse images under recently introduced British legislation.

Substantial Increase in AI-Generated Illegal Content

The announcement coincided with revelations from a protection monitoring body showing that cases of AI-generated CSAM have increased dramatically in the last twelve months, rising from 199 in 2024 to 426 in 2025.

Updated Regulatory Framework

Under the changes, authorities will allow designated AI developers and child safety groups to examine AI models – the foundational technology behind chatbots and image generators – and verify that they have adequate safeguards to stop them from producing images of child exploitation.

The measure is "fundamentally about preventing exploitation before it happens," declared Kanishka Narayan, noting: "Specialists, under rigorous conditions, can now detect the danger in AI models early."

Tackling Legal Obstacles

The changes have been introduced because producing and possessing CSAM is against the law, meaning that AI developers and others could not generate such images as part of an evaluation regime. Until now, officials had to wait until AI-generated CSAM was uploaded online before addressing it.

This legislation is aimed at averting that problem by helping to halt the creation of those materials at source.

Legal Framework

The authorities are introducing the changes as amendments to criminal justice legislation, which also implements a prohibition on possessing, creating or distributing AI systems designed to create exploitative content.

Real-World Consequences

This week, the official toured the London headquarters of a children's helpline and listened to a mock-up call to counsellors involving an account of AI-based exploitation. The interaction portrayed an adolescent seeking help after being blackmailed using a sexualised deepfake of themselves, created with AI.

"When I hear about children experiencing blackmail online, it is a cause of extreme frustration for me and justified concern amongst families," he stated.

Alarming Statistics

A leading internet monitoring foundation reported that cases of AI-generated exploitation material – each case can refer to a webpage containing numerous files – had more than doubled so far this year.

Instances of category A material – the most serious form of abuse – increased from 2,621 images or videos to 3,086.

  • Female children were overwhelmingly victimized, accounting for 94% of illegal AI images in 2025
  • Depictions of infants to two-year-olds rose from five in 2024 to 92 in 2025

Sector Reaction

The law change could "represent a vital step to guarantee AI tools are safe before they are released," stated the chief executive of the online safety foundation.

"AI tools have made it so survivors can be victimised repeatedly with just a few simple actions, giving criminals the ability to create potentially limitless quantities of sophisticated, lifelike child sexual abuse material," she added. "Material which further commodifies victims' suffering, and makes young people, especially female children, less safe both online and offline."

Counselling Session Details

Childline also published details of counselling sessions in which AI was mentioned. AI-related risks raised in the conversations include:

  • Employing AI to rate body size, physique and appearance
  • AI chatbots discouraging children from talking to trusted adults about harm
  • Being bullied online with AI-generated material
  • Digital blackmail using AI-faked images

Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and related topics were mentioned, significantly more than in the same period last year.

Half of the AI mentions in the 2025 sessions related to mental health and wellbeing, including using AI chatbots for support and AI therapy apps.

Rebecca Peters

Tech enthusiast and writer with a passion for exploring how emerging technologies shape our future.