Following a content moderation warning from European Union regulators earlier this week, Meta has published an overview of how it is responding to risks on its social media platforms stemming from the war between Israel and Hamas.
Its blog post covers what it frames as “ongoing efforts,” reviewing some existing policies and tools for users. But the company confirms that it has made some changes in light of the rapid developments in Israel and Gaza.
These include what it says is a temporary expansion of its Violence and Incitement policy to prioritize the safety of Israelis kidnapped by Hamas.
Under this change, Meta says it will remove content that “clearly identifies hostages when we become aware of it, even if it is done to condemn or raise awareness of their plight.” “We allow content with blurry images of victims but, in accordance with the standards set by the Geneva Convention, we will prioritize the safety and privacy of kidnapping victims if we are unsure or unable to make a clear assessment,” the company added.
Meta also says it is prioritizing controls on live streaming features on Facebook and Instagram, including monitoring for any attempts by Hamas to use the tools to broadcast images of captured Israelis or other hostages.
In a particularly disturbing report in Israeli media that circulated widely on social media this week, a girl described how she and her family learned of her grandmother’s death after Hamas militants uploaded a video of her body to Facebook, apparently using the woman’s own mobile phone to post the graphic content to her Facebook page.
“We recognize that the immediacy of Live presents unique challenges, which is why we have existing restrictions on the use of Live for people who have previously violated certain policies. We are prioritizing reports about live streaming related to this crisis, beyond our existing prioritization of live video,” Meta wrote, highlighting the action it took in the wake of the Christchurch attacks in New Zealand in 2019, when a lone shooter livestreamed a massacre targeting two mosques on Facebook.
“We are also aware of Hamas’ threats to release images of the hostages and we take these threats very seriously. Our teams are monitoring this closely and would quickly remove any such content (and the accounts behind it), storing the content on our systems to prevent re-sharing,” the company added.
Other measures Meta says it has taken in response to the situation in Israel and Gaza include making its systems less likely to actively recommend potentially violating or borderline content, reducing the visibility of potentially offensive comments, and applying hashtag blocking so that certain terms related to the conflict cannot be searched on its platforms. The blog post does not specify which hashtags Meta is blocking in relation to the war between Israel and Hamas.
Meta’s blog post also says it established a special operations center staffed with experts, including Arabic and Hebrew speakers, to improve its ability to quickly respond to content reports.
It also says it is receiving feedback from local partners (such as NGOs) about emerging risks and says it is “acting quickly to address them.”
“In the three days after October 7, we removed or marked as disturbing more than 795,000 pieces of content for violating these policies in Hebrew and Arabic,” the company wrote. “Compared to the previous two months, in the three days since October 7, we have removed seven times as much content daily for violating our Dangerous Organizations and Individuals policy in Hebrew and Arabic alone.”
In light of the attention and concern over the situation, Meta says it’s possible that non-violating content could be removed “by mistake.”
“To mitigate this, for some violations we are temporarily removing content without strikes, which means that these content removals will not cause accounts to be deactivated,” it notes. “We also continue to provide tools so that users can appeal our decisions if they think we made a mistake.”
Compliance with the bloc’s Digital Services Act (DSA) kicked in for Meta in August, as the owner of so-called very large online platforms (VLOPs).
The Commission designated 19 VLOPs in April, including Meta-owned Facebook and Instagram.
The designation places an obligation on VLOPs to respond diligently to reports of illegal content, as well as to clearly communicate their terms and conditions to users and enforce them properly. But it also has a broader scope: it requires these larger platforms to take steps to identify and mitigate systemic risks such as disinformation.
The regulation also contains a “crisis response” mechanism that the Commission can invoke against VLOPs in situations where the use of their platforms could contribute to serious threats such as war.
Penalties for failing to comply with the pan-European regulation can reach up to 6% of global annual turnover, which, in Meta’s case, could amount to several billion dollars.
The social media giant isn’t the only one to be warned by the bloc over content concerns related to the war between Israel and Hamas: Elon Musk’s X has been singled out for even more attention here, with the bloc issuing an “urgent” warning earlier this week, followed by a formal request for information about its compliance approach.
TikTok also received a warning from the EU over DSA content risks related to the conflict.