
Online AI-generated child sexual abuse material increased in 2025


A new report from the Internet Watch Foundation (IWF) shows that AI-generated child sexual abuse material (CSAM) reached a record high last year, with over 8,000 cases identified. The research also found that the majority of AI-generated material was category A, the most severe category under UK law.

Report published

The IWF report, entitled ‘Harm without limits: AI child sexual abuse material through the eyes of our Analysts’ and published today, details AI CSAM statistics for 2025. Analysts assessed 8,029 AI-generated images and videos as showing realistic child sexual abuse. Their data shows a 260-fold increase in videos of AI-generated child sexual abuse. This material was available both on the so-called ‘dark web’ and on commercial platforms on the ‘clear’ web.

The IWF found that realistic full-motion AI video content is now commonplace, and that 65% of the video content is classed as category A, representing the most extreme illegal content. Their report indicates that AI-generated CSAM “fuels sexual interest in children, normalises extreme violence, and increases the risk of contact offending.”

Safety by design

The Internet Watch Foundation is a UK-based charity with a global remit to hunt down and remove child sexual abuse material. Part of its work involves monitoring online discussions among paedophiles on the dark web. One analyst said: “It is very apparent from the unsettling dark web conversations observed by the IWF Hotline that AI innovations are regarded with delight by users of child sexual abuse material. Every new development in generative AI is extolled for its ability to enhance the realism, to heighten the severity, or make more immersive, any conceivable sexual scenario with a child. This could be through adding audio to video, being able to depict multiple people interacting or even being able to successfully manipulate imagery of a real child known to an offender.”

The IWF is calling for an AI Bill that requires artificial intelligence platforms to implement safety by design as standard. This would mean testing AI systems before they are released to ensure they cannot be used to generate CSAM, putting content moderation policies in place, and using trusted datasets to keep CSAM out of AI training data.

A new poll conducted by Savanta shows that 82% of the UK population want the government to introduce this kind of legislation to regulate AI platforms.
