Facebook fights terrorist content on its site using AI, human expertise

Why it matters to you

To help fight terrorism, Facebook is increasing its efforts to keep the platform a safe space for all users.

As the number of terrorist attacks continues to rise globally, Facebook is attempting to be more transparent about its plans to keep terrorist content off its platform. To make these efforts more efficient, the company has enlisted both artificial intelligence and human expertise.

To kick off the initiative, Facebook introduced a blog series called “Hard Questions” as a space to discuss complicated subjects. The first post in the series, titled “How We Counter Terrorism,” was written by Monika Bickert, Facebook’s director of global policy management, and Brian Fishman, its counterterrorism policy manager. In it, the two explain in detail how Facebook intends to make the platform a hostile environment for terrorists.

The post lists a number of current tactics that use AI. One is image matching: when someone uploads a photo or video, systems check whether it matches terrorism content previously removed by Facebook, which prevents other accounts from re-posting the same material. Facebook is also experimenting with language analysis, studying text previously removed for supporting terrorist organizations in order to develop text-based signals. These signals should strengthen the existing algorithms so that they catch similar posts more quickly.
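Facebook has not disclosed how its image matching works, but a common family of techniques for finding re-uploads of known content is perceptual hashing, where visually similar images produce similar fingerprints. The sketch below is purely illustrative, assuming a simple average-hash scheme; the `known_hashes` set and `threshold` parameter are hypothetical stand-ins, and production systems rely on far more robust fingerprints.

```python
from PIL import Image

HASH_SIZE = 8  # 8x8 grid -> 64-bit fingerprint


def average_hash(path: str) -> int:
    """Compute a simple perceptual (average) hash of an image.

    Shrink the image to an 8x8 grayscale grid, then set one bit per
    pixel depending on whether it is brighter than the grid's mean,
    so visually similar images yield similar bit patterns.
    """
    img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE), Image.LANCZOS)
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count the bits that differ between two fingerprints."""
    return bin(a ^ b).count("1")


def matches_known_content(path: str, known_hashes: set[int], threshold: int = 5) -> bool:
    """Flag an upload whose fingerprint is close to any previously removed item."""
    upload_hash = average_hash(path)
    return any(hamming_distance(upload_hash, h) <= threshold for h in known_hashes)
```

The text-based signals work the same way in spirit: posts previously removed for supporting terrorism become labeled training data for a classifier. Again purely as a hypothetical baseline (Facebook has not published its models), a sketch using scikit-learn might look like this:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline, make_pipeline


def build_text_signal_model() -> Pipeline:
    """A baseline text classifier: TF-IDF word n-gram features feeding a
    linear model, trained on posts previously removed for supporting
    terrorist organizations (label 1) versus ordinary posts (label 0)."""
    return make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
        LogisticRegression(max_iter=1000),
    )


# Hypothetical usage, assuming `texts` and `labels` come from a corpus of
# previously reviewed posts:
#   model = build_text_signal_model()
#   model.fit(texts, labels)
#   score = model.predict_proba([new_post])[0][1]  # higher = more suspect
```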

Human judgment is still required, however, to keep the AI from flagging terrorism-related imagery that appears in legitimate contexts, such as news stories. To ensure constant monitoring, the community operations team works 24 hours a day, and its members are collectively fluent in dozens of languages. The company has also expanded its dedicated team of terrorism and safety specialists, whose backgrounds range from former prosecutors to engineers and whose sole responsibility is countering terrorism.

Facebook’s head count will keep growing: CEO Mark Zuckerberg has announced plans to add 3,000 more employees to the community operations team around the globe, a decision that came after a string of violent deaths and other incidents was broadcast over Facebook Live. With a larger team of reviewers, Zuckerberg noted, inappropriate content can be taken down faster and people in danger can be reached sooner.

The company also continues to develop partnerships with researchers, governments, and other companies, including Microsoft, YouTube, and Twitter. These businesses contribute to a shared industry database of digital fingerprints (hashes) of known terrorist content.
