A ban on deepfakes must be included in the Online Safety Act, charity says

Online safety charity Internet Matters is calling for a ban on AI-powered 'nudifying' apps, which create non-consensual explicit images.

The group is campaigning for stronger legislation, including updates to the Online Safety Act, to tackle this growing issue, which affects both children and adults.

Internet Matters estimates that up to half a million children have encountered such images online, with over half of teenagers fearing the creation of deepfake nudes more than the sharing of real explicit images.

The charity highlighted that the 'nudifying' tools used to create such images are not themselves illegal in the UK, even though possessing AI-generated sexual images of children is a criminal offence.

A recent study from the Internet Watch Foundation found that AI-generated child sexual abuse content is increasingly appearing on the open web.

Internet Matters noted that 99% of nude deepfakes depict women and girls, and that such images are often used in child sexual abuse and sextortion.

Carolyn Bunting, co-CEO of Internet Matters, emphasised the devastating impact on victims, particularly girls, stating that this abuse "can happen to anybody, at any time."

The charity's study, which surveyed 2,000 parents and 1,000 children, revealed that boys are twice as likely as girls to have encountered nude deepfakes, while girls are more often the victims. There was strong support for better education on deepfakes: 92% of teenagers and 88% of parents want lessons on the risks to take place in schools.

Jess Phillips, Minister for Safeguarding, welcomed the work of Internet Matters, saying it "has provided an important insight into how emerging technologies are being misused."

She acknowledged the growing concern and affirmed the government's commitment to tackling AI-related abuse as part of its broader mission to combat violence against women and girls.
