British Tech Firms and Child Safety Agencies to Test AI's Ability to Create Abuse Images
Tech firms and child protection organizations will receive permission to assess whether AI systems can produce child abuse material under new UK legislation.
Significant Increase in AI-Generated Illegal Material
The announcement came as findings from a child protection watchdog showed that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
New Regulatory Structure
Under the amendments, authorities will allow approved AI developers and child safety groups to inspect AI models – the underlying systems behind chatbots and image generators – and verify that they have adequate safeguards to prevent them from producing depictions of child sexual abuse.
"Fundamentally about stopping exploitation before it happens," declared the minister for AI and online safety, adding: "Specialists, under rigorous protocols, can now identify the danger in AI systems early."
Tackling Regulatory Challenges
The amendments were introduced because it is illegal to produce and possess CSAM, meaning that AI developers and others could not generate such images as part of a testing process. Until now, authorities have had to wait until AI-generated CSAM was published online before they could act.
This legislation aims to avert that problem by making it possible to halt the production of those images at source.
Legislative Structure
The government is adding the amendments to the crime and policing bill, which also introduces a prohibition on possessing, producing or sharing AI models designed to create exploitative content.
Practical Consequences
Recently, the minister visited the London headquarters of a children's helpline and listened to a mock-up call to advisers featuring a report of AI-based abuse. The call depicted an adolescent seeking help after being blackmailed with an explicit AI-generated image of himself.
"When I learn about children experiencing extortion online, it is a source of extreme frustration in me and justified concern amongst families," he stated.
Concerning Statistics
A leading online safety organization said that reports of AI-generated abuse content – each of which can refer to a webpage containing numerous files – had risen sharply so far this year.
Instances of the most severe category of material – the most serious form of abuse – increased from 2,621 images or videos to 3,086.
- Girls were overwhelmingly victimized, accounting for 94% of illegal AI depictions in 2025
- Depictions of newborns to two-year-olds increased from five in 2024 to 92 in 2025
Sector Reaction
The law change could "represent a crucial step to ensure AI products are safe before they are launched," said the head of the internet monitoring organization.
"Artificial intelligence systems have made it so survivors can be victimised all over again with just a simple actions, giving criminals the ability to make possibly endless quantities of sophisticated, photorealistic exploitative content," she added. "Material which further exploits victims' trauma, and makes children, particularly girls, more vulnerable on and off line."
Counseling Session Data
The children's helpline also published details of counselling sessions in which AI was mentioned. AI-related risks discussed in the conversations include:
- Using AI to evaluate body size, physique and appearance
- Chatbots discouraging children from talking to trusted guardians about abuse
- Facing harassment online with AI-generated content
- Online blackmail using AI-manipulated images
Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and associated terms were discussed, significantly more than in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and turning to AI therapy apps.