
Facebook claims it removed three million pieces of ISIS and Al Qaeda propaganda in the third quarter of 2018



At congressional hearings last year, Facebook executives, including CEO Mark Zuckerberg, cited the company's success in using artificial intelligence and machine learning to find terrorist content as an example of how it proactively uses technology to take down other content that violates its policies, such as hate speech. Today, in a blog post, the company highlighted some of the new tools it is already using.

In the post, attributed to Facebook Vice President of Global Policy Monika Bickert, Facebook said it had removed 9.4 million pieces of terrorist content in the second quarter of 2018 and 3 million in the third quarter. In comparison, 1.9 million pieces of content were removed in the first quarter.

It is important to note that in this report, Facebook defines terrorist content as "content related to ISIS, Al Qaeda and their affiliates" and does not break down content removed from other hate groups. Facebook's internal policies define a terrorist organization more broadly, describing it as "any nongovernmental organization that intentionally carries out acts of violence against persons or property in order to intimidate a civilian population, a government or an international organization for a political, religious or ideological aim."

The increase in content Facebook had to remove from Q1 to Q2 might seem a bit worrying at first, but the company says this is because it focused on taking down older content during the second quarter. For the past three quarters, Facebook has reported proactively finding and removing 99 percent of terrorist-related content itself, yet the amount of content flagged by users has continued to rise, from 10,000 pieces in the first quarter to 16,000 in the third. More statistics on how much terrorist content Facebook has removed over the last few quarters, and how old it was, can be found below:

More importantly, Facebook also shared some new details about the tools it uses and how it decides when something should come down. Facebook says machine learning is now being used to give posts a "score" indicating how likely it is that a post signals support for the Islamic State (aka ISIS), al-Qaeda or other affiliated groups. Facebook's review team prioritizes posts with the highest scores, and if a score is high enough, Facebook sometimes removes the content before human reviewers can even look at it.
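Facebook has not published its implementation, but the workflow the post describes, score each post, auto-remove at high confidence, and queue the rest for reviewers by score, can be illustrated roughly as follows. Everything here (the threshold, the names, and the placeholder classifier) is hypothetical:

```python
from dataclasses import dataclass
from typing import List

# Hypothetical threshold: the blog post does not disclose the score
# at which Facebook removes content without waiting for human review.
AUTO_REMOVE_THRESHOLD = 0.98

@dataclass
class Post:
    post_id: str
    text: str
    score: float = 0.0  # model's estimate that the post supports ISIS/al-Qaeda

def score_post(post: Post) -> float:
    """Placeholder for the (undisclosed) machine-learning classifier.

    A production system would run a trained model over the post's text,
    media and metadata; this stub returns a constant so the sketch runs.
    """
    return 0.5

def remove_content(post: Post) -> None:
    """Placeholder for the takedown action."""
    print(f"removed {post.post_id} (score={post.score:.2f})")

def triage(posts: List[Post]) -> List[Post]:
    """Score every post, auto-remove the highest-confidence violations,
    and return the rest ordered so reviewers see the top scores first."""
    review_queue = []
    for post in posts:
        post.score = score_post(post)
        if post.score >= AUTO_REMOVE_THRESHOLD:
            remove_content(post)  # removed before a human ever looks at it
        else:
            review_queue.append(post)
    return sorted(review_queue, key=lambda p: p.score, reverse=True)
```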

Facebook also said it recently began using audio and text hashing techniques to detect terrorist content; previously, only image and video hashing were used. It is now also experimenting with algorithms that identify posts whose text violates its terrorism policies, across 19 languages.
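Facebook does not explain how its hashing works, but the general idea behind hash matching is to fingerprint content that has already been confirmed as violating and check new uploads against that database. A minimal sketch for text, using an exact SHA-256 hash over normalized text purely for illustration (production media-matching systems typically use robust or perceptual hashes that tolerate small edits):

```python
import hashlib

def text_fingerprint(text: str) -> str:
    """Exact-match fingerprint of normalized text (illustrative only)."""
    normalized = " ".join(text.lower().split())  # collapse case and whitespace
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Hypothetical database of fingerprints taken from content already
# confirmed to violate the terrorism policy.
known_violations = {
    text_fingerprint("example of previously removed propaganda text"),
}

def matches_known_content(text: str) -> bool:
    """True if this text is a (normalized) duplicate of removed content."""
    return text_fingerprint(text) in known_violations
```

The same lookup pattern extends to audio, images and video by swapping in a fingerprint function suited to that medium.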

Facebook did not say what other types of content these systems could soon be used to detect, though it acknowledged that terrorists "come in many ideological stripes." But it is clear that if Facebook can use machine learning to determine whether a post expresses support for a particular group, the same systems might be trained in the future to find support for other well-known hate groups, like white nationalists.

It is also noteworthy that, although Facebook has presented the decline in how long terrorist content stays on the platform as a success, the company itself recognizes that this is not the best metric. "Our analysis shows that the time taken to act is a less meaningful measure of harm than metrics that explicitly focus on actual exposure to content," Bickert wrote. "Focusing only on the wrong metrics can lead to failures [sic] or prevent us from doing our most effective work for the community."

