On Thursday, ahead of the November US election, Twitter announced a more detailed anti-misinformation policy. Beginning September 17, the platform will label or remove tweets containing false claims of election fraud or premature election results, treating these categories as particularly likely to cause immediate harm.
“The conversation on Twitter is never more important than during the elections,” said Twitter in a blog post that introduced the new policy. “Twitter is the place where people hear directly from elected officials and candidates for office. This is where they can find the latest news and, increasingly, an important source of information on when and how to vote in elections.”
Twitter’s new rules are likely to bring the platform into conflict with President Trump, who over the past few months has sent out numerous tweets making false or misleading statements about the November election and the voting process. Under the new policy, Twitter will label or remove misleading posts about ballot tampering, election results, and election fraud.
The Trump administration has already clashed with Twitter over fact-checks applied to the president’s tweets. After Twitter labeled one of Trump’s tweets about mail-in voting as misleading this summer, Trump signed an executive order aimed at weakening the protections social media platforms enjoy under Section 230 of the Communications Decency Act. The order asked the Federal Communications Commission to reinterpret that central internet law, but the agency has yet to act on it.
Last week, Facebook also announced it would flag posts from candidates who prematurely declare victory. Facebook said it would additionally stop accepting new political ads in the week before the November election and expand its efforts to flag posts that could suppress voting. Twitter banned all political advertising last year.