Under intense political pressure to better block terrorist propaganda on the internet, Facebook is leaning more on artificial intelligence.

The social-media firm said Thursday that it has expanded its use of A.I. in recent months to identify potential terrorist postings and accounts on its platform—and at times to delete or block them without review by a human. In the past, Facebook and other tech giants relied mostly on users and human moderators to identify offensive content. Even when algorithms flagged content for removal, these firms generally turned to humans to make a final call.

Companies have sharply increased the volume of content they remove over the past two years, but these efforts have not proven effective enough to tamp down a groundswell of criticism from governments and advertisers. Those critics have accused Facebook, Google parent Alphabet Inc. and others of complacency over the proliferation of inappropriate content on their social networks, in particular posts or videos deemed extremist propaganda or communication.

In response, Facebook disclosed new software that it says it is using to better police its content. One tool, in use for several months now, combs the site for known terrorist imagery, such as beheading videos, to stop it from being reposted, executives said Thursday. Another set of algorithms attempts to identify, and sometimes autonomously block, propagandists who try to open new accounts after they have been kicked off the platform. A third, experimental tool uses A.I. trained to identify language used by terrorist propagandists.
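The article does not describe how the repost-blocking tool works internally, but matching previously removed media against new uploads is typically done with content fingerprinting. Below is a minimal sketch of that general idea, assuming a hash-based blocklist; the function names and workflow are illustrative, not Facebook's actual system.

```python
import hashlib

# Illustrative sketch: block re-uploads of media that reviewers have
# already removed by keeping a set of fingerprints of known content.
# A production system would use a perceptual hash that survives
# re-encoding and small edits; the cryptographic hash used here only
# catches byte-identical reposts.

known_hashes: set[str] = set()  # fingerprints of previously removed media

def fingerprint(media_bytes: bytes) -> str:
    # Exact-match fingerprint of the raw file contents.
    return hashlib.sha256(media_bytes).hexdigest()

def register_removed(media_bytes: bytes) -> None:
    # Called once reviewers confirm a removal, so the same file
    # is blocked automatically if anyone re-uploads it.
    known_hashes.add(fingerprint(media_bytes))

def should_block(media_bytes: bytes) -> bool:
    # Checked at upload time, before the content is published.
    return fingerprint(media_bytes) in known_hashes
```

The design choice this sketch highlights is that no human review is needed on the second upload: once content has been judged and fingerprinted, every subsequent match can be blocked automatically, which is consistent with the automated removals the article describes.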

This story originally appeared in The Wall Street Journal.