British Technology Firms and Child Safety Officials to Test AI's Capability to Generate Exploitation Content
Tech firms and child safety organizations will receive permission to evaluate whether AI tools can generate child exploitation material under recently introduced British legislation.
Significant Rise in AI-Generated Illegal Material
The announcement coincided with findings from a protection watchdog showing that cases of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
New Legal Structure
Under the changes, the government will allow designated AI developers and child protection groups to examine AI systems – the foundational systems for conversational AI and image generators – and verify they have adequate safeguards to stop them from creating images of child sexual abuse.
"This is fundamentally about stopping abuse before it occurs," stated Kanishka Narayan, adding: "Specialists, under rigorous protocols, can now detect the danger in AI models promptly."
Addressing Legal Challenges
The amendments have been introduced because producing and possessing CSAM is against the law, meaning that AI creators and others cannot create such images even as part of an evaluation process. Previously, authorities had to wait until AI-generated CSAM was uploaded online before taking action against it.
This legislation is aimed at preventing that issue by helping to stop the production of those images at their origin.
Legal Framework
The government is introducing the changes as revisions to the criminal justice legislation, which also establishes a prohibition on possessing, producing or sharing AI systems designed to generate child sexual abuse material.
Real-World Consequences
Recently, the minister toured the London headquarters of Childline and heard a simulated call to advisors featuring a report of AI-based abuse. The call depicted a teenager requesting help after facing extortion using a sexualised AI-generated image of themselves.
"When I learn about young people facing blackmail online, it causes intense anger in me and justified anger amongst parents," he said.
Alarming Data
A leading internet monitoring foundation stated that instances of AI-generated abuse material – such as online pages that may include multiple files – had significantly increased so far this year.
Cases of category A material – the gravest form of abuse – rose from 2,621 visual files to 3,086.
- Female children were predominantly targeted, accounting for 94% of prohibited AI images in 2025
- Depictions of newborns to toddlers increased from five in 2024 to 92 in 2025
Sector Reaction
The legislative amendment could "represent a vital step to ensure AI tools are secure before they are launched," stated the chief executive of the online safety organization.
"Artificial intelligence systems have made it possible for victims to be targeted all over again with just a few simple actions, giving offenders the ability to create potentially limitless amounts of advanced, photorealistic child sexual abuse material," she added. "Material which further exploits victims' suffering, and makes children, especially female children, more vulnerable both online and offline."
Support Session Data
The children's helpline also released details of counselling sessions where AI has been mentioned. AI-related harms discussed in the sessions include:
- Using AI to rate body size and appearance
- AI assistants dissuading young people from talking to trusted adults about abuse
- Being bullied online with AI-generated content
- Digital blackmail using AI-faked images
Between April and September this year, the helpline conducted 367 support interactions where AI, chatbots and associated topics were mentioned, significantly more than in the same period last year.
Half of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including using chatbots for support and AI therapy applications.