
Why Didn't Tech Stop the Attack on New Zealand?



At least 49 people were murdered on Friday at two mosques in Christchurch, New Zealand, in an attack that followed a grim playbook for terrorism in the age of social media. The shooter apparently seeded warnings on Twitter and 8chan before live-streaming 17 minutes of the rampage on Facebook. Almost immediately, copies of the video spread across the internet, including on YouTube, Twitter, and Reddit. News organizations also broadcast some of the footage as they reported on the destruction.

When Silicon Valley executives woke up on Friday morning, the tech giants' algorithms and armies of international content moderators were already scrambling to contain the damage, without much success. Many hours after the shooting began, various versions of the video remained easily searchable on YouTube using simple keywords such as the shooter's name.

This is hardly the first time this pattern has played out. It's been nearly four years since two Virginia journalists were shot and killed on camera and the killer's first-person video spread across Facebook and Twitter. It's been almost three years since footage of the mass shooting in Dallas also went viral.

The Christchurch massacre has rightly led people to wonder why, after all this time, tech companies still haven't found a way to stop these videos from spreading. The answer may be disappointingly simple: it's a lot harder than it sounds.

For years, both Facebook and Google have been developing and deploying automated tools to detect and remove photos, videos, and text that violate their policies. Facebook uses PhotoDNA, a tool developed by Microsoft, to recognize known images and videos of child pornography. Google has developed its own open-source version of that tool. These companies have also invested in technology to spot extremist content, and have teamed up under the banner of the Global Internet Forum to Counter Terrorism to share their repositories of known terrorist content. These programs generate digital fingerprints, or hashes, for images and videos that have been identified as problematic, so that they can be blocked from being re-uploaded. Facebook and others have also invested in machine learning trained to detect new problematic content, such as a beheading or a video bearing an ISIS flag. All of this is in addition to the AI tools that detect more prosaic problems like copyright infringement.
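To make the hash-matching step concrete, here is a minimal sketch in Python of how a platform might check an upload against a shared database of fingerprints. It is illustrative only: the hash set, function names, and the use of a plain cryptographic hash are assumptions for the example, whereas production systems such as PhotoDNA use perceptual hashes designed to survive re-encoding, cropping, and watermarking.

```python
import hashlib
from pathlib import Path

# Hypothetical database of fingerprints for content already flagged as violating.
# In practice this would be an industry-shared list of perceptual hashes, not
# the cryptographic digests used here purely for illustration.
KNOWN_VIOLATING_HASHES: set[str] = set()


def fingerprint(path: Path) -> str:
    """Return a hex digest identifying the uploaded file's bytes."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def should_block_upload(path: Path) -> bool:
    """Block re-uploads whose fingerprint matches known violating content."""
    return fingerprint(path) in KNOWN_VIOLATING_HASHES
```

A real pipeline would run a check like this at upload time and again when a stream is flagged, and a byte-exact hash would miss even slightly re-encoded copies, which is why perceptual hashing matters.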

Automated moderation systems are imperfect, but they can be effective. On YouTube, for example, the vast majority of removed videos are flagged by automation, and 73 percent of those auto-flagged videos are taken down before a single person views them.

These processes get much trickier, however, when it comes to live video and footage that is broadcast in the news. The Christchurch shooting footage checks both boxes.

"They have failed to have an effective AI to proactively suppress this type of content, even if it is the richest money […] industry in the world," says Dipayan Ghosh, an employee of Kennedy School of Harvard and former member of Facebook's privacy and privacy team. That's one reason why both Facebook and YouTube have teams of moderators checking content around the world.

Motherboard has a revealing article on how Facebook's content moderators assess live videos flagged by users. According to internal documents obtained by Motherboard, moderators have the option to ignore the video, delete it, check back on it again in five minutes, or escalate it to specialized review teams. Those documents say moderators are also encouraged to look for warning signs in live videos, such as "crying, pleading, begging" and the "display or sound of guns or other weapons (knives, swords) in any context."

It's unclear why the Christchurch video was able to stream for 17 minutes, or whether that even counts as a short time frame by Facebook's standards. The company did not respond to WIRED's questions on this topic, or to questions about how it differentiates between newsworthy content and gratuitous violence.

Instead, Facebook sent WIRED this statement: "Our hearts go out to the victims, their families, and the community affected by this terrible act. New Zealand Police alerted us to a video on Facebook shortly after the livestream commenced, and we removed both the shooter's Facebook and Instagram accounts and the video. We are also removing any praise or support for the crime and the shooter or shooters as soon as we're aware. We will continue to work directly with New Zealand Police as their response and investigation continues."

A Google spokesperson in New Zealand sent a similar statement in response to WIRED's questions: "Our hearts go out to the victims of this terrible tragedy. Shocking, violent, and graphic content has no place on our platforms and will be removed as soon as we become aware of it. As with any major tragedy, we will work cooperatively with the authorities."

The Google spokesperson added, however, that videos of the shooting that have news value will be allowed to remain up. That puts the company in the tricky position of deciding which videos actually count as news. It would be far easier for tech companies to take a blunt-force approach and ban every clip of the shooting from being posted on their sites, perhaps using the same fingerprinting technology that removes child pornography. Some might argue that approach is worth considering. But both Facebook and YouTube have carved out explicit exceptions for news organizations in their content moderation guidelines. In other words, the same clip posted to glorify the shooting on one YouTube account could also appear in a news report from a local news outlet.

YouTube, in particular, has been criticized in the past for deleting videos of atrocities in Syria that researchers relied on. That leaves tech companies in the difficult position of not only judging newsworthiness, but also finding ways to automate those judgments at scale.

As Google's general counsel Kent Walker wrote in a blog post back in 2017: "Machines can help identify problematic videos, but human experts still play a role in nuanced decisions about the line between violent propaganda and religious or newsworthy speech."

Of course, there are signals these companies can use to determine the origin and purpose of a video, Ghosh says. "The timing of the content, the history of what the purveyor of the content has posted in the past. These are the types of signals you have to use when you get into those inevitable situations where you have news organizations and individuals pushing out the same content, but you only want the news organization to be able to do it," he says.
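As a rough illustration of how such signals might be combined, the hypothetical heuristic below scores an upload using timing and uploader-history features. The profile fields, weights, and threshold logic are invented for this sketch and do not reflect any platform's actual ranking or enforcement code.

```python
from dataclasses import dataclass


@dataclass
class UploaderProfile:
    """Hypothetical signals about the account posting a matched video."""
    is_verified_news_org: bool   # e.g. an allow-listed publisher account
    prior_policy_strikes: int    # history of removed content
    account_age_days: int
    minutes_since_event: float   # how soon after the incident the upload arrived


def newsworthiness_score(profile: UploaderProfile) -> float:
    """Toy heuristic: higher scores lean toward keeping the clip up as news."""
    score = 0.0
    if profile.is_verified_news_org:
        score += 2.0
    score -= 0.5 * profile.prior_policy_strikes
    score += min(profile.account_age_days / 365, 2.0) * 0.25
    # Copies pushed out within minutes of an attack by unknown accounts are
    # more likely re-uploads of the original footage than news coverage.
    if profile.minutes_since_event < 60 and not profile.is_verified_news_org:
        score -= 1.0
    return score
```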

Ghosh argues that one reason tech companies haven't gotten better at this is that they lack concrete incentives. "There is no stick in the air that forces them to have better content moderation systems," he says. Last year, European Commission regulators proposed fines for platforms that allow extremist content to remain online for more than an hour.

Finally, there is the persistent problem of scale. It's possible that both YouTube and Facebook have simply become too big to moderate. Some have suggested that if the Christchurch videos are appearing faster than YouTube can take them down, YouTube should halt all video uploads until it gets the problem under control. But it's not clear which voices would be silenced in the process; for all their shortcomings, social media platforms can also be valuable sources of information during breaking news. And the sad truth is that if Facebook and YouTube shut down every time a hideous post went viral, they might never start up again.

All of this, of course, is exactly the shooter's strategy: to exploit human behavior, and technology's inability to keep up with it, in order to cement his terrible legacy.

Tom Simonite contributed reporting.

This is a developing story. We will update it as more information becomes available.

