While there have been no major terrorist attacks in Western Europe in recent months, brokers and insurers offering terrorism cover should be quick to remind clients that the threat is far from gone, as a recent Facebook clampdown demonstrates.
This week the social media giant announced a series of changes aimed at improving how it combats terrorists, violent extremist groups, and hate organisations, not only on Facebook but also on Instagram. The changes form part of the company’s dangerous individuals and organisations policy, which is designed to prevent real-world harm from manifesting on its platforms.
Facebook said the updates include those implemented in the last few months as well as others that went into effect in 2018 but have not been widely discussed.
“Some of these changes predate the tragic terrorist attack in Christchurch, New Zealand, but that attack, and the global response to it in the form of the Christchurch Call to Action, has strongly influenced the recent updates to our policies and their enforcement,” stated the social networking enterprise.
“First, the attack demonstrated the misuse of technology to spread radical expressions of hate, and highlighted where we needed to improve detection and enforcement against violent extremist content. In May, we announced restrictions on who can use Facebook Live and met with world leaders in Paris to sign the New Zealand government’s Christchurch Call to Action.”
According to Facebook, it also co-developed a nine-point industry plan in partnership with Microsoft, Twitter, Google, and Amazon. It outlines the steps being taken to address the abuse of technology to spread terrorist content.
As far as terrorist content is concerned, Facebook said it has removed more than 26 million pieces of content related to groups such as ISIS and al-Qaeda in the last two years. It added that 99% of these were proactively identified and taken down by Facebook before anyone reported them.
The firm has since expanded the use of its detection techniques to a wider range of dangerous organisations, including both terrorist and hate groups.
“We’ve banned more than 200 white supremacist organisations from our platform, based on our definitions of terrorist organisations and hate organisations, and we use a combination of AI and human expertise to remove content praising or supporting these organisations,” revealed Facebook.
“The process to expand the use of these techniques started in mid-2018 and we’ll continue to improve the technology and processes over time.”
As for the Christchurch attack, which was livestreamed by the perpetrator, Facebook admitted that the video did not trigger the platform’s automatic detection systems because the company did not have enough first-person footage of violent events to effectively train its machine learning technology.
“That’s why we’re working with government and law enforcement officials in the US and UK to obtain camera footage from their firearms training programs – providing a valuable source of data to train our systems,” said Facebook.
“With this initiative, we aim to improve our detection of real-world, first-person footage of violent events and avoid incorrectly detecting other types of footage such as fictional content from movies or video games.”
These changes, along with a slew of other initiatives, have been led by what Facebook described as a multi-disciplinary group of safety and counter-terrorism experts.
“Previously, the team was solely focused on counter-terrorism – identifying a wide range of organisations including white supremacists, separatists, and Islamist extremist jihadists as terrorists,” noted Facebook.
“Now, the team leads our efforts against all people and organisations that proclaim or are engaged in violence leading to real-world harm. And the team now consists of 350 people with expertise ranging from law enforcement and national security, to counter-terrorism intelligence and academic studies in radicalisation.”
Facebook added: “We’ll need to continue to iterate on our tactics because we know bad actors will continue to change theirs.”