The number of artificial intelligence-generated child sexual abuse videos online has surged, according to a new report by the UK-based Internet Watch Foundation, raising urgent concerns over the misuse of generative AI tools.
The IWF revealed it had identified 1,286 illegal AI-generated videos depicting child sexual abuse in the first half of 2025—up from just two during the same period last year.
Over 1,000 of these videos were classified as Category A, the most severe form of abuse under UK law.
The watchdog said the material is becoming increasingly difficult to distinguish from real footage, with recent advances in generative video technology “crossing the threshold” of realism.
It warned that rapid and unregulated deployment of powerful AI tools—fueled by billions of dollars in tech investment—is enabling paedophiles to create and distribute hyper-realistic child sexual abuse material at an alarming scale.
The IWF is now urging tech companies and regulators to take immediate action to prevent further exploitation.
“It is a very competitive industry. Lots of money is going into it, so unfortunately there is a lot of choice for perpetrators,” said one IWF analyst.
The IWF also uncovered a 400% increase in web pages containing AI-generated child sexual abuse content in the first half of 2025, finding 210 such URLs compared with just 42 in the same period last year.
Many of these sites hosted hundreds of images, with a notable rise in video content.
On a dark web forum, one offender remarked on how quickly AI tools are advancing, saying they had only just mastered one before “something new and better” appeared.
IWF analysts said offenders are taking freely available AI models and “fine-tuning” them with child sexual abuse material to generate highly realistic videos. In some instances, just a few CSAM videos were enough to train the models.
According to the watchdog, the most lifelike AI-generated abuse videos seen this year were based on identifiable real-world victims.