TikTok and Instagram accused of targeting children with self-harm and suicide content

Photo: Bruce Mars / Unsplash

TikTok and Instagram are under scrutiny after a new report found teenagers are being exposed to a "tsunami" of disturbing self-harm and suicide content.

The research was commissioned by the Molly Rose Foundation, established by Ian Russell after his 14-year-old daughter took her own life having viewed harmful content. Researchers used simulated accounts of a 15-year-old girl in the UK.

It found that both platforms still push a flood of harmful material through algorithmic recommendations.

Content 'still pervasive' across social media

The study reviewed 300 Instagram Reels and 242 TikToks. It concluded that 97% of Instagram Reels and 96% of TikToks recommended to these accounts were harmful. Over 40% of Instagram videos referenced self-harm or suicide, and some had millions of likes.

Ian Russell said, "It is staggering that eight years after Molly's death, incredibly harmful suicide, self-harm, and depression content like she saw is still pervasive across social media. The situation has got worse rather than better, despite the actions of governments and regulators and people like me. The report shows that if you strayed into the rabbit hole of harmful suicide self-injury content, it's almost inescapable."

He described the results as "horrifying" and called for stronger, life-saving legislation.

Companies defend teen safety measures

Meta, which owns Instagram, rejected the report’s conclusions and defended its teen safety measures.

TikTok also responded, citing over 50 built-in safety features and claiming 99% of harmful content is removed before being reported.

Technology Secretary Peter Kyle said the findings showed tech companies had allowed dangerous content to spread unchecked:

"These figures show a brutal reality - for far too long, tech companies have stood by as the internet fed vile content to children, devastating young lives and even tearing some families to pieces.

"But companies can no longer pretend not to see. The Online Safety Act, which came into effect earlier this year, requires platforms to protect all users from illegal content and children from the most harmful content, like promoting or encouraging suicide and self-harm. 45 sites are already under investigation."

Clear need for age verification

Ofcom noted that the data was collected before its latest child protection measures took effect in July and warned that tech firms not complying with the law could face enforcement action.

A separate report from the Children’s Commissioner found a rise in young people’s exposure to violent and degrading online pornography. Some respondents said they first encountered such content as early as age six.

The Commissioner has called for stricter enforcement of age verification measures to prevent underage access.
