UK Technology Companies and Child Protection Officials to Examine AI's Capability to Create Exploitation Images

Under new British legislation, technology companies and child protection organizations will be granted authority to assess whether artificial intelligence tools can generate child abuse material.

Substantial Increase in AI-Generated Harmful Content

The announcement came as a protection monitoring body published findings showing that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

New Regulatory Structure

Under the changes, the government will allow approved AI developers and child safety groups to inspect AI models – the foundational technology for conversational AI and visual AI tools – and verify they have sufficient safeguards to stop them from producing depictions of child exploitation.

"Ultimately about preventing exploitation before it happens," stated Kanishka Narayan, noting: "Experts, under rigorous conditions, can now identify the danger in AI systems promptly."

Addressing Legal Obstacles

The changes have been introduced because producing and possessing CSAM is against the law, meaning that AI creators and other parties could not generate such content as part of an evaluation regime. Previously, officials had to wait until AI-generated CSAM was uploaded online before addressing it.

This law is designed to avert that issue by helping to stop the production of such material at source.

Legislative Structure

The government is introducing the amendments as modifications to the crime and policing bill, which also establishes a ban on possessing, producing or distributing AI models developed to create exploitative content.

Practical Impact

This week, the official visited the London headquarters of a children's helpline and listened to a mock-up call to advisors featuring a report of AI-based abuse. The interaction portrayed a teenager seeking help after being blackmailed with an explicit AI-generated image of themselves.

"When I learn about young people facing extortion online, it is a cause of intense frustration in me and justified concern amongst families," he said.

Concerning Data

A leading internet monitoring organization stated that instances of AI-generated abuse material – such as online pages that may contain numerous files – had significantly increased so far this year.

Cases of category A material – the most serious form of exploitation – increased from 2,621 visual files to 3,086.

  • Female children were overwhelmingly victimized, accounting for 94% of illegal AI depictions in 2025
  • Depictions of newborns to toddlers increased from five in 2024 to 92 in 2025

Sector Reaction

The legislative amendment could "represent a crucial step to guarantee AI products are secure before they are launched," commented the head of the internet monitoring organization.

"Artificial intelligence systems have enabled so victims can be targeted repeatedly with just a few clicks, giving criminals the ability to create potentially limitless quantities of sophisticated, lifelike exploitative content," she added. "Material which further commodifies victims' suffering, and makes children, especially female children, less safe on and off line."

Counseling Session Data

Childline also published details of support interactions in which AI was mentioned. AI-related risks raised in the conversations include:

  • Employing AI to rate body size, physique and looks
  • AI assistants discouraging children from consulting trusted adults about harm
  • Being bullied online with AI-generated material
  • Online blackmail using AI-manipulated pictures

Between April and September this year, Childline conducted 367 counselling sessions in which AI, chatbots and associated terms were mentioned, four times as many as in the same period last year.

Half of the mentions of AI in the 2025 sessions related to psychological wellbeing and wellness, including the use of chatbots for support and AI therapy apps.
