Facebook to crack down on extremism by training AI with police videos
Facebook will work with law enforcement agencies to train its artificial intelligence systems to detect videos of violent events as part of its ongoing battle against extremism on the platform.
The new effort, announced in a Tuesday blog post, will use body-cam footage of firearms training provided by U.S. and U.K. government and law enforcement agencies to teach Facebook's systems to automatically detect first-person footage of violent events without also flagging violence from movies or video games.
The tech giant came under fire earlier this year when its AI systems were unable to detect a live-streamed video of a mass shooting at a mosque in Christchurch, New Zealand. The company eventually imposed some new restrictions on live-streaming.
The Mark Zuckerberg-led company is also expanding its definition of terrorism to include not just acts of violence intended to achieve a political or ideological aim, but also attempts at violence, especially when aimed at civilians with the intent to coerce and intimidate.
Facebook has been trying to stem the tide of extremist content for years. In March, it banned material from white nationalist and white separatist groups. The social network says it has banned 200 white supremacist organizations and removed 26 million pieces of content related to global terrorist groups such as ISIS and al Qaeda.
The company is facing a wide range of challenges, including an antitrust investigation from a group of state attorneys general, along with a broader probe of Big Tech being led by the FTC and the Justice Department.
More regulation might be needed to deal with the problem of extremist material, Dipayan Ghosh, a former Facebook employee and White House tech policy adviser, told The Associated Press.
“Content takedowns will always be highly contentious because of the platforms’ core business model to maximize engagement,” he said. “And if the companies become too aggressive in their takedowns, then the other side — including propagators of hate speech — will cry out.”