AI-generated child sexual abuse content is increasingly being found on publicly accessible areas of the internet, exposing it to more people, an internet watchdog has warned.
The Internet Watch Foundation (IWF), which finds and removes child sexual abuse content from the internet, said it had received more reports of AI-generated abuse content in the past six months alone than in the previous 12 months combined.
And rather than being hidden in forums on the dark web, the IWF said 99% of this content was found on publicly accessible areas of the internet, with the watchdog warning of the distressing nature of encountering such images.
The IWF's data shows that 78% of the reports it received came from members of the public who had stumbled across the imagery on sites such as forums or AI galleries.
It said many of the AI-generated images and videos of children being hurt or abused are so realistic that they can be difficult to tell apart from imagery of real children, and are regarded as criminal content under UK law.
According to the IWF’s figures, more than half of the AI-generated content found in the past six months was hosted on servers in two countries – Russia and the United States.
Derek Ray-Hill, interim chief executive of the IWF, said: “People can be under no illusion that AI-generated child sexual abuse material causes horrific harm, not only to those who might see it but to those survivors who are repeatedly victimised every time images and videos of their abuse are mercilessly exploited for the twisted enjoyment of predators online.
“To create the level of sophistication seen in the AI imagery, the software used has also had to be trained on existing sexual abuse images and videos of real child victims shared and distributed on the internet.
“The protection of children and the prevention of AI abuse imagery must be prioritised by legislators and the tech industry above any thought of profit.
“Recent months show that this problem is not going away and is in fact getting worse.
“We urgently need to bring laws up to speed for the digital age, and see tangible measures being put in place that address potential risks.”
Many campaigners have called for strict regulation around the training and development of AI models, to ensure they do not generate harmful or dangerous content. They have also called for AI platforms to refuse any requests or queries that could result in such material being created – a system some AI platforms already have in place.
Assistant chief constable Becky Riggs, child protection and abuse investigation lead at the National Police Chiefs’ Council, said: “The scale of online child sexual abuse and imagery is frightening, and we know that the increased use of artificial intelligence to generate abusive images poses a real-life threat to children.
“Law enforcement is committed to finding and prosecuting online child abusers, wherever they are.
“Policing continues to work proactively to pursue offenders, including through our specialist undercover units, who disrupt child abusers online every day, and this is no different for AI-generated imagery.
“While we will continue to relentlessly pursue these predators and safeguard victims, we must see action from tech companies to do more under the Online Safety Act to make their platforms safe places for children and young people.
“This includes and brings into sharp focus those companies responsible for the developing use of AI and the necessary safeguards required to prevent it being used at scale, as we are now seeing.
“We continue to work closely with the National Crime Agency, Government and industry to harness technology which will help us to fight online child sexual abuse and exploitation.”