Facebook says its artificial intelligence is keeping terrorist content off the social network, and the company hopes AI will become an increasingly important safety and security tool, both on Facebook and across the internet.
Facebook says that 99% of the ISIS and Al Qaeda-related terror content the company has removed from Facebook was content detected before anyone in the Facebook community had flagged it to the social network, and in some cases, before it went live on the site.
Facebook does that primarily through the use of automated systems like photo and video matching and text-based machine learning. "Once we are aware of a piece of terror content, we remove 83% of subsequently uploaded copies within one hour of upload," Facebook says.
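Photo and video matching of this kind is typically done with perceptual hashing: a known piece of banned content is reduced to a compact fingerprint, and new uploads whose fingerprints fall within a small Hamming distance are flagged, even if the file has been re-encoded or lightly altered. The sketch below uses a simple "average hash" with an illustrative 8×8 grid and threshold; Facebook's actual matching system is not public, so all of this is an assumption for demonstration:

```python
# Minimal perceptual-hash sketch (average hash).
# Illustrative only: Facebook's real matching pipeline is not public.

def average_hash(pixels):
    """pixels: 8x8 grid of grayscale values (0-255) -> 64-bit fingerprint."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def is_match(upload_hash, known_hashes, threshold=5):
    """Flag an upload whose hash is near any known banned-content hash."""
    return any(hamming(upload_hash, h) <= threshold for h in known_hashes)

# A re-encoded copy shifts pixel values slightly but keeps a close hash.
banned = [[10 * (r + c) % 256 for c in range(8)] for r in range(8)]
copy = [[min(255, v + 3) for v in row] for row in banned]
database = [average_hash(banned)]
print(is_match(average_hash(copy), database))  # the near-duplicate is flagged
```

Because the fingerprint depends on each pixel's relation to the image's average brightness rather than on exact bytes, small distortions leave the hash nearly unchanged, which is what lets a system catch "subsequently uploaded copies" quickly.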
However, deploying AI for counterterrorism is not simple. Depending on the technique, Facebook has to carefully curate databases or have human beings label data to train a machine. A system designed to find content from one terrorist group may not work for another because of language and stylistic differences in their propaganda. Because of these limitations, Facebook focuses its techniques on the terrorist groups that pose the biggest threat globally, in the real world and online. ISIS and Al Qaeda meet this definition most directly, so Facebook is prioritizing its tools to counter these organizations and their affiliates.
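The reliance on human-labeled training data, and the reason a model trained on one group's material may not transfer to another's, can be seen even in a toy text classifier: it only recognizes the vocabulary present in its labeled examples. Below is a minimal Laplace-smoothed bag-of-words sketch; the training strings are invented placeholders, and nothing here reflects Facebook's actual models:

```python
# Toy bag-of-words text classifier: shows why humans must label training
# data, and why a model keyed to one vocabulary misses a different one.
# All training strings are invented placeholders, not real data.
from collections import Counter
import math

def train(labeled_docs):
    """labeled_docs: list of (text, label) pairs -> per-label word counts."""
    counts = {}
    for text, label in labeled_docs:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(text, counts):
    """Pick the label whose word distribution best explains the text."""
    words = text.lower().split()
    best, best_score = None, float("-inf")
    for label, c in counts.items():
        total, vocab = sum(c.values()), len(c)
        # Laplace-smoothed log-likelihood of the text under this label
        score = sum(math.log((c[w] + 1) / (total + vocab)) for w in words)
        if score > best_score:
            best, best_score = label, score
    return best

# Humans supply the labels (here: harmless placeholder phrases).
data = [
    ("join our cause fight now", "violating"),
    ("glory to our fighters", "violating"),
    ("family picnic photos this weekend", "benign"),
    ("great recipe for dinner tonight", "benign"),
]
model = train(data)
print(classify("come fight for glory", model))  # prints "violating"
```

A classifier like this scores text only against words it was trained on, so propaganda in a different language or style would need its own labeled corpus, which is one reason the techniques are focused on specific groups.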
Facebook has faced increased criticism around the globe from governments concerned that the social network provides a platform for terrorist propaganda and recruitment. British Prime Minister Theresa May has been particularly vocal in her attacks on social media companies and has sought to rally the leaders of other democracies to impose greater regulation on these tech businesses.
Facebook has joined forces with Microsoft, Twitter and Google's YouTube to form the Global Internet Forum to Counter Terrorism. The group helps the companies coordinate their efforts to combat terrorist content and share insights with smaller technology companies.
Facebook says humans are still needed to curate the databases of terrorist posts, videos and photographs used to train its AI software, and human experts are also needed to review the decisions the automated tools make.