The Meta Oversight Board has criticized the company's automated moderation tools for being too aggressive after two videos showing hostages, wounded civilians and possible casualties in the war between Israel and Hamas were, it says, unfairly removed from Facebook and Instagram. In a report released Tuesday, the external review panel determined that the posts should have remained published and that removing the content comes at a high cost to “freedom of expression and access to information” in the war. (A warning to our readers: the following content descriptions may be disturbing.)
One of the deleted videos, posted on Facebook, shows an Israeli woman during the Hamas attack on Israel on October 7, pleading with the kidnappers who were taking her hostage not to kill her. The other video was posted on Instagram and shows what appears to be the aftermath of an Israeli strike on or near al-Shifa hospital in Gaza City. The post contains images of dead or injured Palestinians, including children.
The board says that, in the case of the latter video, both the removal and the rejection of the user's appeal to restore the footage were carried out by Meta's automated moderation tools, without any human review. The board reviewed the decision within an “expedited 12-day period,” and once it took up the case, the videos were restored with a content warning screen.
In its report, the board found that moderation thresholds, which had been lowered to more easily detect violating content after the October 7 attack, “also increased the likelihood that Meta mistakenly removed non-violating content related to the conflict.” The board says that the lack of human-led moderation during this type of crisis can lead to the “inappropriate removal of speech that may be of significant public interest,” and that Meta should have been quicker to allow content “shared for the purpose of condemning, awareness raising, news reporting or calls for release” with a warning screen applied.
The board also criticized Meta for demoting the two reviewed posts after applying warning screens, preventing them from being recommended to other Facebook and Instagram users, even though the company acknowledged that the posts were intended to raise awareness. Meta has since responded to the board's decision to overturn the removals, saying that because the panel did not provide recommendations, there will be no further updates to the case.
Meta is not the only social media giant coming under scrutiny for its handling of content related to the war between Israel and Hamas. Verified users on X (formerly Twitter) have been accused of being “misinformation super-spreaders” by the misinformation watchdog organization NewsGuard. TikTok and YouTube are also being scrutinized under the EU Digital Services Act following a rise in illegal content and misinformation on the platforms, and the EU has opened a formal investigation into X. The Oversight Board case, by contrast, highlights the risks of over-moderation, and the complicated line that platforms have to walk.