
Is Meta censoring pro-Palestinian content? Human Rights Watch report says yes


In a damning 51-page report, Human Rights Watch (HRW) has accused Meta, the parent company of Facebook and Instagram, of engaging in 'systemic and global' censorship of pro-Palestinian content since the onset of the Israel-Gaza war on October 7. 

The report details over a thousand instances where Meta reportedly removed content and suspended or permanently banned accounts, exhibiting what HRW identifies as 'six key patterns of undue censorship.'

The alleged censorship includes the removal of posts, stories, and comments, the disabling of accounts, restrictions on users' ability to interact with others' posts, and 'shadow banning' techniques that reduce the visibility and reach of individuals expressing support for Palestine.

HRW claims that the affected content originated from over 60 countries, predominantly in English, and represented 'peaceful support of Palestine, expressed in diverse ways.' Notably, even HRW's attempts to seek examples of online censorship were flagged as spam, according to the report.

In response to the allegations, Meta acknowledged making errors that may be 'frustrating' for users but vehemently denied the accusation of deliberate and systemic suppression of pro-Palestinian voices. 

The company argued that the claim of 'systemic censorship' based on 1,000 examples is misleading, emphasising the vast amount of content posted about the conflict.

Meta asserted that it is the only company globally to publicly release human rights due diligence on issues related to Israel and Palestine. The company defended its position, stating that enforcing policies during a fast-moving and highly polarised conflict, combined with an increase in reported content, poses challenges. Meta's statement emphasised providing everyone with a voice while maintaining platform safety.

This is not the first time Meta has faced accusations of silencing pro-Palestinian content. Democratic Senator Elizabeth Warren recently wrote to Meta's CEO, Mark Zuckerberg, seeking information about hundreds of reports from Instagram users claiming their content was demoted or removed. 

Meta's Oversight Board also recently ruled that the company was wrong to remove two videos depicting the aftermath of an airstrike and a woman being taken hostage during the October 7 attack, and the clips were reinstated.

Users of Meta's platforms have further documented what they believe to be technological bias in favour of pro-Israel content and against pro-Palestinian posts. 

Instances include Instagram's translation software altering the meaning of Arabic phrases and WhatsApp's AI generating different images for Palestinian and Israeli children. As Meta faces mounting scrutiny, questions about content moderation and bias continue to cast a shadow over the social media giant.
