Facebook Inc. announced in a corporate blog post today that it is updating its definition of “terrorist organizations” and changing its counterterrorism team’s structure to more effectively address the use of its platform to amplify white supremacist violence.
You can read the post here: Facebook: Combating Hate and Extremism.
LOL, we'll believe it when we see it.
But, I guess, good news?
Kyle Wiggers at VentureBeat has more. So does Davey Alba at the New York Times. The hand-waving by Facebook is about mea-culpa-ing performatively before a big hearing on Capitol Hill with lawmakers who want to spank Facebook for a whole host of sins.
Here's the Facebook statement in full:
Today, we’re sharing a series of updates and shifts that improve how we combat terrorists, violent extremist groups and hate organizations on Facebook and Instagram. These changes primarily impact our Dangerous Individuals and Organizations policy, which is designed to keep people safe and prevent real-world harm from manifesting on our services. Some of the updates we’re sharing today were implemented in the last few months, while others went into effect last year but haven’t been widely discussed.
Some of these changes predate the tragic terrorist attack in Christchurch, New Zealand, but that attack, along with the global response to it in the form of the Christchurch Call to Action, has strongly influenced the recent updates to our policies and their enforcement. First, the attack demonstrated the misuse of technology to spread radical expressions of hate, and highlighted where we needed to improve detection and enforcement against violent extremist content. In May, we announced restrictions on who can use Facebook Live and met with world leaders in Paris to sign the New Zealand Government’s Christchurch Call to Action. We also co-developed a nine-point industry plan in partnership with Microsoft, Twitter, Google and Amazon, which outlines the steps we’re taking to address the abuse of technology to spread terrorist content.
Improving Our Detection and Enforcement
Two years ago, we described some of the automated techniques we use to identify and remove terrorist content. Our detection techniques include content matching, which allows us to identify copies of known bad material, and machine-learning classifiers that examine a wide range of signals on a post and assess whether it’s likely to violate our policies. To date, we have identified a wide range of groups as terrorist organizations based on their behavior, not their ideologies, and we do not allow them to have a presence on our services. While our intent was always to use these techniques across different dangerous organizations, we initially focused on global terrorist groups like ISIS and al-Qaeda. This has led to the removal of more than 26 million pieces of content related to these groups in the last two years, 99% of which we proactively identified and removed before anyone reported it to us.
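A minimal sketch of how those two tiers (hash-based content matching plus a classifier score routed to human review) might fit together, assuming a Python setting. The hash set, function names, and 0.9 threshold below are hypothetical; this illustrates the technique the post names, not Facebook's actual pipeline:

    import hashlib

    # Hypothetical fingerprint set for material already judged to violate
    # policy. A real system would use a shared industry hash database and
    # perceptual hashes (so re-encoded or lightly edited copies still
    # match); SHA-256 here only catches byte-identical copies.
    KNOWN_BAD_HASHES = set()

    def fingerprint(content: bytes) -> str:
        # Exact-match fingerprint of an uploaded file.
        return hashlib.sha256(content).hexdigest()

    def triage(content: bytes, classifier_score: float, threshold: float = 0.9) -> str:
        # Tier 1: copies of known violating material are removed outright.
        if fingerprint(content) in KNOWN_BAD_HASHES:
            return "remove"
        # Tier 2: a learned classifier scores novel content; high scores
        # are routed to human reviewers rather than removed automatically.
        if classifier_score >= threshold:
            return "human_review"
        return "allow"

The two tiers are complementary: matching is cheap and precise for copies of known material, while the classifier generalizes to novel content at the cost of sending borderline scores to human review.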
We’ve since expanded the use of these techniques to a wider range of dangerous organizations, including both terrorist groups and hate organizations. We’ve banned more than 200 white supremacist organizations from our platform, based on our definitions of terrorist organizations and hate organizations, and we use a combination of AI and human expertise to remove content praising or supporting these organizations. The process to expand the use of these techniques started in mid-2018 and we’ll continue to improve the technology and processes over time.
We’ll need to continue to iterate on our tactics because we know bad actors will continue to change theirs, but we think these are important steps in improving our detection abilities. For example, the video of the attack in Christchurch did not trigger our automatic detection systems because we did not have enough content depicting first-person footage of violent events to effectively train our machine learning technology. That’s why we’re working with government and law enforcement officials in the US and UK to obtain camera footage from their firearms training programs – providing a valuable source of data to train our systems. With this initiative, we aim to improve our detection of real-world, first-person footage of violent events and avoid incorrectly detecting other types of footage such as fictional content from movies or video games.
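As a toy illustration of that training setup, here is how one might assemble positives from first-person footage and hard negatives from movies and games, so a model learns to separate real events from fiction. The types and the embed function are hypothetical stand-ins, not Facebook's actual training code:

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Example:
        features: List[float]  # stand-in for a learned video embedding
        label: int             # 1 = real first-person violent footage, 0 = not

    def build_training_set(bodycam_clips, movie_clips, game_clips,
                           embed: Callable[[bytes], List[float]]) -> List[Example]:
        # Positives: real first-person footage, e.g. the law-enforcement
        # camera footage described above.
        data = [Example(embed(clip), 1) for clip in bodycam_clips]
        # Hard negatives: fictional violence from movies and video games,
        # so the model learns "real vs. fictional" rather than flagging
        # every violent image.
        data += [Example(embed(clip), 0) for clip in movie_clips + game_clips]
        return data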
Updating Our Policy
While terrorism is a global issue, there is currently no globally recognized and accepted definition of terrorist organizations. So we’ve developed a definition to guide our decision-making on enforcing against these organizations. We are always looking to see where we can improve and refine our approach and we recently updated how we define terrorist organizations in consultation with counterterrorism, international humanitarian law, freedom of speech, human rights and law enforcement experts. The updated definition still focuses on the behavior, not ideology, of groups. But while our previous definition focused on acts of violence intended to achieve a political or ideological aim, our new definition more clearly delineates that attempts at violence, particularly when directed toward civilians with the intent to coerce and intimidate, also qualify.
Giving People Resources to Leave Behind Hate
Our efforts to combat terrorism and hate don’t end with our policies. In March, we started connecting people who search for terms associated with white supremacy on Facebook Search to resources focused on helping people leave behind hate groups. When people search for these terms in the US, they are directed to Life After Hate, an organization founded by former violent extremists that provides crisis intervention, education, support groups and outreach. And now, we’re expanding this initiative to more communities.
We’re expanding this initiative to Australia and Indonesia and partnering with Moonshot CVE to measure the impact of these efforts to combat hate and extremism. Being able to measure our impact will allow us to hone best practices and identify areas where we need to improve. In Australia and Indonesia, when people search for terms associated with hate and extremism, they will be directed to EXIT Australia and ruangobrol.id respectively. These are local organizations focused on helping individuals turn away from violent extremism and terrorism. We plan to continue expanding this initiative and we’re consulting partners to further build this program in Australia and explore potential collaborations in New Zealand. And by using Moonshot CVE’s data-driven approach to disrupting violent extremism, we’ll be able to develop and refine how we track the progress of these efforts across the world to connect people with information and services to help them leave hate and extremism behind. We’ll continue to seek out partners in countries around the world where local experts are working to disengage vulnerable audiences from hate-based organizations.
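A rough sketch of how this kind of country-specific search intervention could be wired up. The partner names come from the post itself; the term watchlist and function are illustrative only, not Facebook's actual implementation:

    # Partner organizations named in the post, keyed by country; the term
    # watchlist is a placeholder for an expert-curated, localized list.
    REDIRECT_PARTNERS = {
        "US": "Life After Hate",
        "AU": "EXIT Australia",
        "ID": "ruangobrol.id",
    }
    FLAGGED_TERMS = {"placeholder extremism-related term"}

    def support_resource(query: str, country: str):
        # Surface the local partner alongside normal results when a search
        # matches the watchlist; return None otherwise.
        if any(term in query.lower() for term in FLAGGED_TERMS):
            return REDIRECT_PARTNERS.get(country)
        return None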
Expanding Our Team
All of this work has been led by a multi-disciplinary group of safety and counterterrorism experts developing policies, building product innovations and reviewing content with linguistic and regional expertise to help us define, identify and remove terrorist content from Facebook and Instagram. Previously, the team was solely focused on counterterrorism — identifying a wide range of organizations including white supremacists, separatists and Islamist extremist jihadists as terrorists. Now, the team leads our efforts against all people and organizations that proclaim or are engaged in violence leading to real-world harm. And the team now consists of 350 people with expertise ranging from law enforcement and national security, to counterterrorism intelligence and academic studies in radicalization.
This new structure was informed by a range of factors, but we were particularly driven by the rise in white supremacist violence and the fact that terrorists increasingly may not be clearly tied to specific terrorist organizations before an attack occurs, as was seen in Sri Lanka and New Zealand. This team of experts is now dedicated to taking the initial progress we made in combating content related to ISIS, al-Qaeda and their affiliates, and further building out techniques to identify and combat the full breadth of violence and extremism covered under our Dangerous Organizations policy.
Remaining Committed to Transparency
We are committed to being transparent about our efforts to combat hate. That’s why, when we share the fourth edition of the Community Standards Enforcement Report in November, our metrics on enforcing our policies against terrorist organizations will include our efforts against all terrorist organizations for the first time. To date, the data we’ve provided about our efforts to combat terrorism has covered al-Qaeda, ISIS and their affiliates. These updated metrics will better reflect our comprehensive efforts to combat terrorism worldwide.
We know that bad actors will continue to attempt to skirt our detection with more sophisticated efforts and we are committed to advancing our work and sharing our progress.
[From techmeme.com]