Facebook Removes More ISIS Content by Actively Looking for It


Facebook said it removed more content from the Islamic State and al-Qaeda in the first quarter of 2018 because it actively searched for it, according to Bloomberg.

The company has trained its review systems, both human moderators and computer algorithms, to seek out posts from terrorist groups. The social network took action on 1.9 million pieces of content from those groups in the first three months of the year, about twice as many as in the previous quarter. Of that content, 99 percent was flagged by the company’s internal systems rather than reported first by users, Facebook said.

Facebook, like Twitter and YouTube, has historically put the onus on its users to flag content for its moderators to review. After pressure from governments to acknowledge its immense power over the spread of terrorist propaganda, Facebook began taking more direct responsibility about a year ago. Earlier this month, CEO Mark Zuckerberg told Congress that Facebook now believes it has a responsibility for the content on its site.

The company defines terrorists as non-governmental organizations that engage in premeditated acts of violence against people or property to intimidate and achieve a political, religious or ideological aim. That definition includes religious extremists, white supremacists and militant environmental groups. “It’s about whether they use violence to pursue those goals,” Facebook said. The policy doesn’t apply to governments because, in the company’s words, “nation-states may legitimately use violence under certain circumstances.”

Facebook didn’t give any numbers for its takedowns of content from white supremacists or other groups it considers to be linked to terrorism, in part because its systems’ training has so far focused on the Islamic State and al-Qaeda.