UK Technology Companies and Child Safety Officials to Test AI's Capability to Generate Abuse Images
Technology companies and child safety agencies will be granted authority to evaluate whether AI tools can generate child abuse images under new UK legislation.
Significant Increase in AI-Generated Harmful Content
The announcement came as a safety monitoring body revealed that cases of AI-generated child sexual abuse material (CSAM) have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
New Legal Framework
Under the amendments, the government will allow approved AI developers and child protection organizations to inspect AI models – the underlying systems that power chatbots and image-generation tools – and ensure they have sufficient safeguards to prevent them from producing depictions of child exploitation.
"Ultimately about preventing exploitation before it occurs," stated Kanishka Narayan, adding: "Specialists, under strict protocols, can now identify the risk in AI systems promptly."
Tackling Legal Challenges
The amendments were needed because it is illegal to create or possess CSAM, meaning that AI developers and others could not generate such content even as part of an evaluation regime. Previously, authorities had to wait until AI-generated CSAM had been uploaded online before dealing with it.
This legislation is designed to close that gap by allowing the creation of such material to be stopped at its source.
Legislative Framework
The authorities are introducing the amendments as revisions to criminal justice legislation, which also establishes a prohibition on possessing, producing or distributing AI models designed to generate exploitative content.
Real-World Consequences
This week, the minister toured the London base of a children's helpline and listened to a mock-up call to counsellors involving an account of AI-based abuse. The call portrayed an adolescent seeking help after being blackmailed with an explicit deepfake of himself, created using AI.
"When I learn about young people facing extortion online, it is a cause of extreme anger in me and justified concern amongst parents," he stated.
Alarming Statistics
A leading internet monitoring organization said that reports of AI-generated exploitation material – each of which can be a webpage containing multiple images – had risen significantly so far this year.
Instances of the most severe category of content rose from 2,621 images or videos to 3,086.
- Girls were predominantly victimized, accounting for 94% of illegal AI depictions in 2025
- Depictions of newborns to toddlers rose from five in 2024 to 92 in 2025
Sector Reaction
The legislative amendment could "constitute a crucial step to guarantee AI tools are secure before they are launched," stated the chief executive of the internet monitoring foundation.
"Artificial intelligence systems have enabled so victims can be victimised all over again with just a few clicks, giving offenders the capability to create possibly endless quantities of advanced, lifelike child sexual abuse material," she added. "Material which additionally exploits victims' trauma, and makes children, particularly female children, less safe both online and offline."
Support Interaction Information
Childline also released details of support sessions where AI has been mentioned. AI-related harms discussed in the conversations include:
- Employing AI to rate body size, physique and appearance
- AI assistants dissuading young people from consulting trusted adults about harm
- Being bullied online with AI-generated material
- Digital extortion using AI-manipulated pictures
Between April and September this year, Childline conducted 367 counselling sessions in which AI, chatbots and related topics were discussed, significantly more than in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.