
Gory beheading video inserted into notorious TikTok to shock unsuspecting users

The video has been added to TikTok’s blocking system, the short-form video company told The Independent

Adam Smith
Monday 07 June 2021 16:40 BST
(SOPA Images/LightRocket via Getty)

A viral TikTok video showing a girl being beheaded has been removed from the short-form video platform.

The video of the gory act was created on another platform, although its origin is unclear, and was spliced into a different TikTok so it could be seen and shared by unsuspecting users. It appeared to originate from user “Mayenggo3”, whose account appears to be locked. The Independent was unable to contact the user.

The clip starts with a young girl dancing in front of a camera, before immediately cutting to a different video where the horrific act occurs.

“We appreciate the concerted effort by our community to warn about an unconscionably graphic clip from another site that was spliced into a video and brought onto TikTok”, a spokesperson for the company told The Independent.

“The original video was quickly removed, and our systems are proactively detecting and blocking attempted re-uploads of the clip to catch malicious behaviour before the content can receive views. We apologise to those in our community, including our moderators, who may have come upon this content.”

The video has been added to TikTok’s machine-learning detection system, which will now automatically detect it before it can receive any views. However, it is still not known whether the graphic act is genuine.
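TikTok has not explained how its blocking system works. One widely used approach to catching re-uploads of known footage is perceptual hashing, in which a compact fingerprint of each video frame is compared against fingerprints of the banned clip. The Python sketch below is purely illustrative and not a description of TikTok’s system; the 8x8 frame format, the helper names and the matching threshold are assumptions.

```python
# Illustrative sketch of perceptual-hash matching for re-upload detection.
# Frame data, function names and the threshold are assumptions, not TikTok's
# actual implementation.

from typing import List


def average_hash(frame: List[List[int]]) -> int:
    """Compute a simple average hash from a small greyscale frame.

    `frame` is assumed to be an 8x8 grid of brightness values (0-255),
    e.g. produced by downscaling a video frame.
    """
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        # Each bit records whether a pixel is brighter than the frame average.
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count the bits that differ between two hashes."""
    return bin(a ^ b).count("1")


def is_reupload(candidate_hash: int, blocked_hashes: List[int], threshold: int = 10) -> bool:
    """Flag a frame whose hash is close to any hash taken from the blocked clip."""
    return any(hamming_distance(candidate_hash, h) <= threshold for h in blocked_hashes)
```

Because the comparison tolerates small differences, such a system can catch clips that have been re-encoded or lightly edited, which is why platforms typically pair fingerprinting with human review rather than relying on exact file matches.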

TikTok’s community guidelines state that it does “not allow content that is gratuitously shocking, graphic, sadistic, or gruesome or that promotes, normalizes, or glorifies extreme violence or suffering on our platform.”

It is also unclear whether TikTok’s automated moderation would take down content that has been faked with sophisticated techniques. In 2019, Facebook said that its own automatic detection failed to identify video of the New Zealand mosque shooting because its systems could not distinguish it from first-person perspective video game footage.

A spokesperson for TikTok told The Independent that the company would investigate the possibility.

This is not the first time TikTok has been used to spread violent and shocking content. A video of a man taking his own life spread across the platform in 2020, despite attempts to take it down.

Many social media companies, including TikTok, are attempting to make content moderation both faster and more accurate, but there are considerable challenges. Because human intervention is too slow to adequately deal with the spread of such content, companies often turn to artificial intelligence.

During the coronavirus pandemic, YouTube relied more heavily on its algorithms to remove content, but those systems have failings too.

In a 2019 report, Ofcom noted that users will try to subvert machine-based moderation, that these systems cannot understand context as well as humans, and that they can undermine freedom of expression.
