Artificial Intelligence

AI chatbots can help abusers harass women, warns Refuge


Research by the domestic violence charity Refuge has found that artificial intelligence chatbots can help abusers stalk and harass their victims, and evade police investigation.

Research on chatbots

Refuge has partnered with Durham and Swansea Universities to produce a report, “Invisible No More”, which examines the ways chatbots can be used to facilitate violence against women. The report will be presented to Parliament on Wednesday as part of an effort to regulate AI chatbots.

The research identified “chatbot-driven violence against women and girls”, in which artificial intelligence is the initiator, perpetrator and driver of the abuse. Incidents have been identified where AI platforms were used to harass women, give advice on stalking, and even urge users toward physical violence.

Harmful advice

Emma Pickering from Refuge said: “Chatbots have given really harmful information [when we] asked things such as: ‘How do we stalk someone without them knowing?’ ‘How do we send harmful posts without it being traced back to us?’ They’ve given us information and they’ve even gone further, saying: ‘Have you thought about sending from an obscure email address and getting a burner phone so it won’t come back to your mobile?’”

Pickering also said that AI can be used by abusers to craft menacing narratives, “alluding to the fact that they’re watching them, they’re monitoring them, they know where they’re going, what they’re doing, who they’re speaking to” and that chatbots “can work 24/7, in a way in which humans can’t … I think it can just feel really relentless for a lot of our survivors.”

Rapidly escalating threat

The House of Lords is considering amendments to the Crime and Policing Bill that would create new offences for AI companies that launch risky chatbots. The amendments are needed because the Online Safety Act does not classify all of the emerging chatbots as potentially harmful.

Co-author of the report, Professor Clare McGlynn from Durham Law School, said: “Chatbot violence against women represents a rapidly escalating threat. Without early intervention, these harms risk becoming entrenched and scaling quickly, mirroring what happened with deepfake and nudify apps, where early warnings were largely ignored. We must not make the same mistakes again.”
