In moments of political tension and social conflict, people have turned to social media to share information, speak truth to power, and report uncensored information from their communities. Just over a decade ago, social media was widely celebrated as a booster—if not a catalyst—for the democratic uprisings that swept the Middle East, North Africa, Spain, and elsewhere. That narrative was always more complex than popular media made it out to be, and these platforms have always had trouble sifting misinformation from fact. But in those early days, social media was a means for disenfranchised and marginalized people, long overlooked by mainstream media, to be heard around the world, often for the first time.

Yet in the wake of Hamas’ deadly attack on southern Israel last weekend—and Israel’s ongoing retributive military attack and siege on Gaza—misinformation has been thriving on social media platforms, particularly on X (formerly known as Twitter). Under owner Elon Musk, the platform has been stripped of its once-robust policies and moderation teams, leaving it exposed to the spread of information that is false (misinformation) and deliberately misleading or biased (disinformation).

It can be difficult to separate verified information from information that has been misconstrued, misrepresented, or manipulated. And the entwining of authentic details and real newsworthy events with old footage or manufactured information can lead to information genuinely worthy of record—such as a military strike in an urban area—becoming associated with a viral falsehood. Indeed, Bellingcat—an organization that was founded amidst the Syrian war and has long investigated mis- and disinformation in the region—found one recent case in which a widely shared video was said to show something false; further investigation revealed that although the video itself was inauthentic, the claim in the text of the post was accurate and highly newsworthy.

As we’ve said many, many times, content moderation does not work at scale, and there is no perfect way to remove false or misleading information from a social media site. But platforms like X have backslid over the past year on a number of measures. Once a relative leader in transparency and content moderation, X has been criticized for failing to remove hate speech and has disabled features that allow users to report certain types of misinformation. Last week, NBC reported that the platform’s Community Notes feature was publishing notes so slowly that corrections to known disinformation were delayed for days. Similarly, TikTok and Meta have implemented lackluster strategies for monitoring the content on their services.

But there are steps that social media platforms can take to increase the likelihood that their sites are places where reliable information is available—particularly during moments of conflict. 

Platforms should:

  • have robust trust and safety mechanisms in place, proportionate to the volume of posts on their site, to address misinformation and to vet and respond to user and researcher complaints;
  • ensure their content moderation practices are transparent, consistent, and sufficiently resourced in all locations where they operate and in all relevant languages; 
  • employ independent, third-party fact-checking, including for content posted by States and government representatives;
  • urge users to read articles and evaluate their reliability before boosting them through their own accounts; 
  • subject their systems of moderation to independent audits to assess their reliability; and
  • adhere to the Santa Clara Principles on Transparency and Accountability in Content Moderation and provide users with transparency, notice, and appeals in every instance, including misinformation and violent content. 

International companies like X and Meta are also subject to the European Union’s Digital Services Act (DSA), which imposes obligations on large platforms to employ robust procedures for removing illegal content and tackling systemic risks and abuse. Last week, the European Commissioner for the Internal Market, Thierry Breton, urged TikTok, warned Meta, and called on Elon Musk to urgently prevent the dissemination of disinformation and illegal content on their sites, and to ensure that proportionate and appropriate measures are in place to guarantee user safety and security online. While these actions serve as a warning to platforms that the European Commission is closely monitoring them and considering formal proceedings, we strongly disagree with the approach of politicizing the DSA to negotiate speech rules with platforms and mandating the swift removal of content that is not necessarily illegal.

Make no mistake: mis- and disinformation can readily work their way into the broader public dialogue. Take, for example, the allegation that Hamas “decapitated babies and toddlers.” This was unverified, yet it inflamed users on social media and led more than five leading newspapers in the UK to print the story on their front pages. The allegation was further legitimized when President Biden claimed to have seen “confirmed pictures of terrorists beheading children.” The White House later walked back this claim, and Israeli officials have since reported that they cannot confirm babies were beheaded by Hamas.

Another example involves the horrific allegations of rape and the deliberate targeting of women and the elderly during the Saturday attack, which have been repeated on social media as well as by numerous political figures, celebrities, and media outlets, including Senator Marco Rubio, Newsweek, the Los Angeles Times, and the Denver Post. President Biden repeated the claims in a speech after speaking with Israeli Prime Minister Netanyahu. The origin of the claims is unclear, but they likely first circulated on social media. The Israel Defense Forces told the Forward that it “does not yet have any evidence of rape having occurred during Saturday’s attack or its aftermath.”

Hamas is also poised to exploit the lack of moderation on X, as a spokesperson for the group told the New York Times. Because Hamas has long been designated a terrorist organization by the United States and the EU, X has addressed the group’s content, stating that the company is working with the Global Internet Forum to Counter Terrorism (GIFCT) to prevent the distribution of content from Hamas and other designated terrorist organizations. Still, the group has vowed to continue broadcasting executions, though it did not say on which platform it would do so.

We are all vulnerable to believing and passing on misinformation. Ascertaining the accuracy of information can be especially difficult for users during conflicts, when channels of communication are compromised and combatants, as well as their supporters, have a self-interest in circulating propaganda. But these challenges do not excuse platforms from employing effective systems of moderation to tackle mis- and disinformation. And without adequate guardrails for users and robust trust and safety mechanisms, this will not be the last instance in which unproven allegations have such dire implications—both online and offline.
