Facebook prompts ask if you're worried someone you know is becoming an extremist. It will also alert you if you may have been exposed to extremist content.
Social media has become a hotbed of extremism in recent years as political discussion has intensified, and Facebook is now asking some users whether they're worried that friends or acquaintances online are becoming extremists.
The social media giant has begun issuing prompts to some users in the U.S. asking just that question, a company spokesman said Thursday. It also began notifying people who may have been exposed to extremist content, according to screenshots posted on Twitter.
One of the notifications, posted on Twitter, asks: “Are you concerned about someone you know becoming an extremist? We care about preventing extremism on Facebook. Other people in your situation have received confidential support.”
Another alert reads, “Violent groups are trying to manipulate your anger and frustration. You can take action now to protect yourself and others.”
Both alerts contain links to support resources for help.
Facebook, Google and Twitter have been under pressure for years to remove extremist content from their platforms before violence spills over into the real world. That attention has intensified this year amid increased scrutiny of the role the platforms played in the run-up to the deadly riot at the U.S. Capitol in January.
The pilot program is part of the Facebook Redirect Initiative, which aims to combat violent extremism on the site by redirecting people looking for hate or violent terms to educational resources and outreach groups.
“This test is part of our larger effort to assess ways to provide resources and support to people on Facebook who may have participated in or been exposed to extremist content, or may know someone who is at risk,” a Facebook spokesperson said in a statement. “We are working with nongovernmental organizations and academic experts in the field and hope to have something to share in the future.”
Facebook said the program is part of its commitment to the Christchurch Call to Action, an international partnership between governments and technology companies that seeks to curb violent extremist content online, formed after the mass murder of 51 people at two mosques in Christchurch, New Zealand, which was broadcast live.
In February, Facebook said it had removed more content in the fourth quarter for violating rules against hate speech, harassment, nudity and other types of offensive content. The company took action against 26.9 million pieces of hate speech content, up from 22.1 million in the third quarter.
But the company also said the prevalence of hate speech, nudity, and violent and graphic content on its platform is declining. According to Facebook, for every 10,000 views of content, seven or eight were of hate speech.