Social media giants Facebook, Twitter, Instagram, YouTube and TikTok “fail” to take action on most anti-Muslim posts, according to a new study released Thursday by the Center for Countering Digital Hate (CCDH).
The five social media companies collectively did not respond to about 89 percent of anti-Muslim posts reported between February 15 and March 9, according to the international not-for-profit, which has offices in Washington, D.C., and London.
The CCDH’s “Failure to Protect” report comes amid ongoing debates about whether and how social media platforms should be regulated. The platforms have largely self-regulated to this point, but U.S. lawmakers have increasingly been calling for changes, and the European Union recently moved forward with legislation that would require social media platforms to battle the spread of misinformation.
The report starts with a reference to the support that Meta, Twitter and Google voiced for the Christchurch Call in the wake of the 2019 mass shootings at mosques in Christchurch, New Zealand. The shootings, which were livestreamed, prompted the launch of the Christchurch Call, which aims to stamp out online content featuring terrorism and violent extremism.
“Once again, their press releases prove to be nothing more than empty promises,” CCDH said of the companies’ earlier support for the Christchurch Call.
For the study released on Thursday, the CCDH said it identified 530 posts in total across the five platforms that “contain disturbing, bigoted, and dehumanizing content that target Muslim people through racist caricatures, conspiracies, and false claims.” The posts were “viewed at least 25 million times,” the CCDH said.
The organization said it reported all posts to the social media companies using each company’s “own reporting tools.” The CCDH said “many” of the posts were “easily identifiable” as “abusive content,” but “there was still inaction.”
After the posts were identified, the CCDH said its auditors went back through and “checked every post and recorded any action taken” by the companies.
Of the 125 posts the CCDH reported to Facebook, seven—about 5.6 percent—were removed. The CCDH said three of the 105 posts it reported to Twitter were removed, or about 2.9 percent.
Instagram and TikTok took more action against content and related profiles, according to the CCDH. Of the 227 posts reported to Instagram, 12 posts were removed and 20 accounts were disabled, resulting in a response rate of 14.1 percent. Twelve of the 50 posts reported to TikTok were removed and six accounts were disabled, leaving TikTok with a response rate of 36 percent.
Newsweek reached out to Twitter, Facebook, Instagram and TikTok for comment.
Meanwhile, the CCDH said YouTube took action on none of the 23 posts that the CCDH reported to the company. When reached Friday for comment, YouTube disputed the report’s findings, saying in a statement that it had taken action on some of the reported content.
“YouTube’s hate speech and harassment policies outline clear guidelines prohibiting content that promotes violence or hatred against individuals or groups where religion, ethnicity or other protected attributes are being targeted,” YouTube told Newsweek. “Of the videos flagged to us by CCDH, five have been removed for violating our hate speech policies and eight have been age-restricted.”
YouTube also directed Newsweek toward its community guidelines, which “make clear we do not allow hate speech or harassment on YouTube,” and its hate speech policy, which “specifically prohibits content when it promotes violence or hatred against individuals or groups based on attributes like their immigration status, nationality, or religion.”
YouTube did not specify which of the 23 reported posts it had removed or restricted, or when the platform responded to the reported posts. Newsweek reached out to YouTube for further comment.
CCDH told Newsweek it was unaware that action had been taken on any of the posts it flagged for YouTube.
CCDH CEO Imran Ahmed told Newsweek the steps YouTube took likely occurred after the CCDH’s last audit for its report.
“They didn’t respond fast,” Ahmed said. “The salient point here is, it’s good that they’ve acknowledged that these videos breach their standards, and taken action. And that begs the question, why they didn’t do so earlier.”
According to a CCDH post on its website about this week’s study, the collective findings “echo CCDH’s previous Failure to Act reports.”
This year’s report ends with several “calls to action,” which include the hiring and training of content moderators by social media companies, specific recommended actions to remove anti-Muslim pages and hashtags, and a series of suggested legislative strategies.