UK Tech Companies and Child Safety Agencies to Test AI's Capability to Create Exploitation Content
Technology companies and child safety organizations will be granted authority to evaluate whether artificial intelligence systems can generate child abuse material under recently introduced British legislation.
Significant Increase in AI-Generated Illegal Content
The declaration coincided with revelations from a protection monitoring body showing that cases of AI-generated child sexual abuse material have more than doubled in the past year, growing from 199 in 2024 to 426 in 2025.
Updated Regulatory Framework
Under the changes, the government will allow designated AI companies and child safety organizations to examine AI models – the underlying technology for chatbots and image generators – and verify they have adequate safeguards to stop them from creating images of child sexual abuse.
"This is ultimately about preventing abuse before it happens," declared Kanishka Narayan, noting: "Experts, under strict conditions, can now detect the danger in AI models early."
Addressing Legal Obstacles
The changes have been introduced because it is illegal to create and possess CSAM, meaning that AI developers and other parties cannot generate such images as part of an evaluation process. Previously, authorities had to wait until AI-generated CSAM was uploaded online before acting on it.
The law is designed to prevent that problem by helping to halt the creation of such material at its source.
Legal Structure
The changes are being added by the government as modifications to the crime and policing bill, which is also establishing a prohibition on possessing, creating or sharing AI models designed to create child sexual abuse material.
Practical Consequences
This week, the minister visited the London base of a children's helpline and listened to a simulated call to advisers involving an account of AI-based abuse. The call depicted a teenager seeking help after being blackmailed with an explicit AI-generated image of himself.
"When I hear about young people facing extortion online, it fills me with intense frustration, and families rightly feel anger," he stated.
Alarming Statistics
A leading internet monitoring foundation stated that cases of AI-generated abuse content – such as webpages that may include numerous images – had significantly increased so far this year.
- Instances of category A material – the gravest form of exploitation – increased from 2,621 images or videos to 3,086
- Female children were overwhelmingly victimized, making up 94% of prohibited AI depictions in 2025
- Depictions of newborns to toddlers rose from five in 2024 to 92 in 2025
Sector Response
The legislative amendment could "constitute a vital step to ensure AI tools are secure before they are released," commented the head of the online safety organization.
"Artificial intelligence systems have made it possible for victims to be victimised repeatedly with just a few simple actions, giving offenders the ability to produce potentially endless amounts of sophisticated, photorealistic exploitative content," she added. "Content which further exploits survivors' suffering, and makes children, especially girls, less safe both online and offline."
Support Interaction Information
Childline also released details of support interactions in which AI was mentioned. AI-related risks raised in the conversations include:
- Using AI to rate weight, physique and appearance
- AI assistants discouraging young people from consulting trusted adults about abuse
- Being bullied online with AI-generated material
- Online extortion using AI-faked images
Between April and September this year, Childline conducted 367 support sessions where AI, conversational AI and related topics were discussed, four times as many as in the equivalent timeframe last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.