Hamas is barred from Facebook, removed from Instagram and run off TikTok. Yet posts supporting the group that carried out terrorist attacks in Israel this month are still reaching mass audiences on social networks, spreading gruesome footage and political messages to millions of people.
Several accounts sympathetic to Hamas have gained hundreds of thousands of followers across social platforms since the war between Israel and Hamas began on Oct. 7, according to a review by The New York Times.
One account on Telegram, the popular messaging app that has little moderation, reached more than 1.3 million followers this week, up from about 340,000 before the attacks. That account, Gaza Now, is aligned with Hamas, according to the Atlantic Council, a research group focused on international relations.
“We’ve seen Hamas content on Telegram, like bodycam footage of terrorists shooting at Israeli soldiers,” said Jonathan A. Greenblatt, the chief executive of the Anti-Defamation League. “We’ve seen images not just on Telegram but on the other platforms of bloodied and dead soldiers.”
Such posts are the latest challenge for technology companies as many of them try to minimize the spread of false or extremist content while preserving posts that do not run afoul of their rules. In past conflicts, like the genocide in Myanmar or earlier clashes between Israel and the Palestinians, social media companies struggled to strike the right balance, with watchdog groups criticizing their responses as too limited or, at times, overzealous.
Experts said Hamas and Hamas-linked social media accounts were now exploiting those challenges to evade moderation and share their messages.
Most online platforms have long banned terrorist organizations and extremist content. Facebook, Instagram, TikTok, YouTube and X (formerly Twitter) have banned accounts linked to Hamas or posts that are overtly sympathetic to its cause, saying they violate their content policies against extremism.
Gaza Now had more than 4.9 million followers on Facebook before it was banned last week, shortly after The Times contacted Meta, Facebook’s parent company, about the account. Gaza Now did not post the kinds of gruesome content found on Telegram, but it did share accusations of wrongdoing against Israel and encouraged its Facebook followers to subscribe to its Telegram channel.
Gaza Now also had a combined 800,000 followers across other social media sites before many of those accounts were also removed last week. Its YouTube channel had 50,000 subscribers but had not been updated since the conflict began. The account was suspended on Tuesday.
In a statement, a spokesman for YouTube said Gaza Now violated the company’s policies because the channel’s owner had previously operated an account on YouTube that was terminated.
Telegram has emerged as the clearest launching pad for pro-Hamas messaging, experts said. Accounts there have shared videos of captured prisoners, dead bodies and destroyed buildings, with followers often responding with the thumbs-up emoji. In one instance, users directed one another to upload gruesome footage of Israeli civilians being shot to platforms like Facebook, TikTok, Twitter and YouTube. The comments also included suggestions on how to alter the footage to make it difficult for social media companies to easily find and remove it.
Telegram also hosts an official account for Al-Qassam Brigades, Hamas’s military wing. Its follower count has tripled since the conflict began.
Pavel Durov, the chief executive of Telegram, wrote in a post last week that the company had removed “millions of obviously harmful content from our public platform.” But he indicated that the app would not bar Hamas outright, saying those accounts “serve as a unique source of first-hand information for researchers, journalists, and fact-checkers.”
“While it would be easy for us to destroy this source of information, doing so risks exacerbating an already dire situation,” Mr. Durov wrote.
X, which Elon Musk owns, was overrun with falsehoods and extremist content almost as soon as the conflict began. Researchers at the Institute for Strategic Dialogue, which tracks hate and extremism online, found that in one 24-hour period, a collection of posts on X that supported terrorist activities received over 16 million views. The European Union said it would examine whether X violated a European law that requires large social networks to stop the spread of harmful content. X did not respond to a request for comment.
Yet accounts not directly claimed by Hamas present thornier challenges for social media companies, and users have criticized the platforms for being overzealous in removing pro-Palestinian content.
Thousands of Palestinian supporters said Facebook and Instagram had suppressed or removed their posts, even when the messages did not break the platforms’ rules. Others reported that Facebook had suppressed accounts that called for peaceful protests in cities around the United States, including planned sit-ins in the San Francisco area over the weekend.
Meta said in a blog post on Friday that Facebook could have inadvertently removed some content as it worked to respond to a surge in reports of posts that violated the site's policies. Some of those posts were hidden because of a bug in Instagram's systems that prevented pro-Palestinian content from appearing in its Stories feature, the company said.
Masoud Abdulatti, a founder of a health care services company, MedicalHub, who lives in Amman, Jordan, said that Facebook and Instagram had blocked his posts supporting Palestinians, and that he had turned to LinkedIn to share support for civilians in Gaza who were trapped in the middle of the conflict.
“The people of the world are ignorant of the truth,” Mr. Abdulatti said.
Eman Belacy, a copywriter who lives in Sharkia governorate in Egypt, noted that she normally used her LinkedIn account only for business networking but had begun posting about the war after she felt that Facebook and Instagram were not showing the full picture of the devastation in Gaza.
“It might not be the place to share war news, but excuse us, the amount of injustice and hypocrisy are unbearable,” Ms. Belacy said.
The challenges reflect the blunt content moderation tools that social networks have increasingly relied on, said Kathleen Carley, a researcher and professor at the CyLab Security and Privacy Institute at Carnegie Mellon University.
Many companies, she said, rely on a blend of human moderators — who can be quickly overrun during a crisis — and some computer algorithms, with no coordination between platforms.
“Unless you do content moderation consistently, for the same story across all the major platforms, you’re just playing Whac-a-Mole,” Ms. Carley said. “It’s going to resurface.”
Sheera Frenkel contributed reporting.