Facebook removed more than 14 million pieces of "terrorist content" from its platform by September of this year, material linked to the Islamic State (IS), Al Qaeda and their affiliates.
In the second quarter of 2018 (April-June), Facebook took action against 9.4 million pieces of terrorist content, most of it old material surfaced using specialized techniques.
In the third quarter (July-September), the total fell to 3 million pieces, of which 800,000 were old content.
"In both Q2 and Q3, we found that more than 99 percent of the IS and Al Qaeda content ultimately removed was flagged by ourselves before it was reported by anyone in our community," said Monika Bickert, Global Head of Policy Management, in a blog post on Thursday.
"These numbers represent a significant increase over the first quarter of 2018, when we took action on 1.9 million pieces of content, of which 640,000 were identified using specialized tools for finding old content," she added.

The US Department of Justice recently cited a suspected IS supporter who warned others to be cautious when posting propaganda on Facebook, pointing to the first-quarter 2018 terrorism metrics as evidence that spreading propaganda on the social media giant has become increasingly difficult.

"We're now using machine learning to score Facebook posts that may signal support for IS or Al Qaeda," said Brian Fishman, head of counterterrorism policy at Facebook.
"In some cases, we will automatically remove posts if the machine learning tool indicates with high confidence that the post contains support for terrorism."
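As a rough illustration of the confidence-gated removal Fishman describes, a scoring threshold could be applied along these lines. This is a minimal sketch: the function names, threshold values, and review queue are invented for illustration and are not Facebook's actual system.

```python
# Hypothetical sketch of confidence-gated auto-removal.
# All names and thresholds here are illustrative assumptions,
# not details of Facebook's real pipeline.

AUTO_REMOVE_THRESHOLD = 0.99  # assumed "high confidence" cutoff
REVIEW_THRESHOLD = 0.5        # assumed cutoff for human review

def triage_post(support_score: float) -> str:
    """Route a post based on a model's terrorism-support score in [0, 1]."""
    if support_score >= AUTO_REMOVE_THRESHOLD:
        return "auto-remove"    # high confidence: remove without human review
    if support_score >= REVIEW_THRESHOLD:
        return "human-review"   # uncertain: queue for a human reviewer
    return "no-action"          # low score: leave the post up

print(triage_post(0.995))  # auto-remove
print(triage_post(0.70))   # human-review
print(triage_post(0.10))   # no-action
```

Only the highest-scoring posts are removed automatically; everything in the uncertain middle band would still go to human reviewers, which is consistent with Fishman's "in some cases" qualifier.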
According to Facebook, the new machine learning tools have cut the time that user-reported terrorist content stays on the platform from 43 hours in the first quarter of 2018 to 18 hours in the third quarter of 2018.