
Graphic video of a suicide spreads from Facebook to TikTok to YouTube as platforms fail the moderation test – TechCrunch



A graphic video of a man who died by suicide on Facebook Live has spread from there to TikTok, Twitter, Instagram, and now YouTube, where it ran alongside ads and attracted thousands more views. Try as they might, the platforms seem unable to stop its spread, echoing past failures to block violent acts and disinformation.

The original video was posted to Facebook two weeks ago and has since found its way onto every major video platform, often disguised with innocuous opening footage before cutting to the man's death. Such techniques for evading automated moderation go back many years; by the time human moderators flag the video, its goal of exposing unsuspecting viewers has already been achieved.

It is similar in many ways to how COVID-19 disinformation has spread and wreaked havoc, even as these platforms deploy their supposedly considerable moderation resources to prevent it.

For all the platforms' talk of advanced algorithms and immediate removal of rule-breaking content, these events seem to show that they fail when it counts most: in extreme cases.

The video of Ronnie McNutt's suicide was streamed on August 31 and took almost three hours to come down at all. By that point, countless people had seen and downloaded it. How can something so graphic, which clearly violates the platform's standards and was actively flagged by users, stay up for so long?

In a "Community Standards Enforcement Report" published on Friday, Facebook admitted that its army of (contracted) human reviewers, whose thankless job it is to watch violent and sexual content all day long, had been partially sidelined by the pandemic:

With fewer content reviewers, we took action on less content on both Facebook and Instagram for suicide and self-harm, and for child nudity and sexual exploitation on Instagram.

The number of appeals is also much lower in this report because we could not always offer them. We let people know about this, and if they felt we had made a mistake, we still gave them the option to tell us they disagreed with our decision.

McNutt's friend and podcast co-host Josh Steen told TechCrunch that the stream was flagged long before his death. "I strongly believe, because I knew him and how these interactions worked, that if the stream had been cut off it would have diverted his attention toward SOME intervention," Steen wrote in an email. "It's pure speculation, but I think if they had cut the stream he wouldn't have ended his life."

When I asked Facebook about it, I received the same statement others have: "We're looking at how we could have taken the livestream down faster." One certainly hopes so.

But Facebook can't contain the spread of such videos – like the various shootings and suicides broadcast on its live platform in the past – once they're out there. At the same time, it's hard to excuse how the other platforms were caught flat-footed: on TikTok, the video was queued up in users' "For You" feeds, exposing countless people through an act of algorithmic irresponsibility. Even if it's not possible to keep the content off the service entirely, something should prevent it from being actively recommended to people.

YouTube is another, later culprit: Steen and others have documented many cases of the video being hosted by monetized accounts. He posted screenshots and recordings of ads from Squarespace and The Motley Fool running before McNutt's video.

It is disappointing that the world's largest video hosting sites, which constantly tout their ability to take down this type of content, don't seem to have a serious response. TikTok, for example, only bans an account after it has made several attempts to upload the clip. What's the point of giving people a second or third chance here?

Facebook apparently couldn't decide whether the content was violating or not, as evidenced by multiple re-uploads in various forms that were flagged but not removed. Perhaps these are just the ones slipping through the cracks while thousands more are nipped in the bud, but why should a company like Facebook, which commands billions of dollars and tens of thousands of employees, get the benefit of the doubt when it fails for the nth time at something so important?

"Facebook announced in early August that they were returning to normal moderation rates, and their AI technology was supposedly actually improving during the COVID slowdown," Steen said. "So why did they totally screw up their reaction to the livestream, and the response time afterwards?"

"We know from the Christchurch livestream incident that they can tell us some things that really need to be disclosed at this point, given how viral this has gone: How many people in total saw the livestream, and how many times was it shared? How many people watched the video afterwards, and how many times was it shared? These statistics are important to me because they show the real impact of the video. I think this data would also confirm where the viewership accumulated on the livestream," he continued.

Entire accounts have popped up on Twitter and Instagram dedicated to uploading the video or impersonating McNutt under various permutations of his username, some even appending "suicide" or "dead" or the like to the name. These accounts are created with the sole intent of breaking the rules. Where are the safeguards against fake and bot activity?

Videos of the suicide keep appearing on YouTube and are removed only haphazardly. Others simply use McNutt's picture or the earlier parts of his stream to attract viewers. Steen and others who knew McNutt have reported these regularly, with mixed success.

One channel I saw had racked up over half a million views by exploiting McNutt's suicide, first posting the live video (with a preroll ad) and then using his face to lure in morbidly curious users. When I pointed this out to YouTube, it demonetized the channel and removed the video described above – even though Steen and his friends had reported it days earlier. I can't help but feel that next time – or rather, elsewhere on the platform where it's happening right now – there will be less or no accountability, since no press outlet is there to make a fuss.

These platforms' focus is on suppressing content invisibly while retaining users and activity. When tougher measures would reduce those all-important metrics, they don't get taken, as we've seen on other social media platforms.

But as this and other situations have shown before, the way livestreaming is provided and monitored seems fundamentally lacking. Of course, it can be of great use in reporting current events, but it can be – and has been – used to stream horrific acts and enable other forms of abuse.

"These companies still aren't fully cooperating, and still aren't really being honest," said Steen. "That's exactly why I created #ReformForRonnie, because we kept seeing that their reporting systems did nothing. If nothing changes, it will just keep happening."

Steen, of course, feels the loss of his friend, but also disappointment and anger at the platforms that allow his image to be abused and mocked while offering only a perfunctory response. He has rallied people around the hashtag to pressure the major social platforms into saying something substantive about the situation. How could they have prevented this? How can they handle it better now that it's already out there? How can they respect the wishes of his loved ones? Perhaps none of these things is possible – but if so, don't expect the companies to admit it.

If you or someone you know needs help, please call the National Suicide Prevention Lifeline at 800-273-TALK (8255) or text the Crisis Text Line at 741-741. International resources can be found here.
