Thursday, November 21, 2024

Meta’s Oversight Board Investigates Report of Holocaust Denial Content

Meta’s Oversight Board is turning its attention to a new case aligned with its strategic objectives. In an official statement, the board announced that it will examine, and welcome public input on over the coming weeks, a case questioning Meta’s decision not to remove Holocaust-denial content from its platforms. The case centers on an Instagram post featuring Squidward, a character from SpongeBob SquarePants, with a speech bubble added to the image denying the historical reality of the Holocaust. The post’s caption and hashtags were deliberately tailored to “specific geographical audiences.”

Originally shared in September 2020 by an account with 9,000 followers, the post garnered approximately 1,000 views. Some weeks later, Meta updated its content policies to explicitly prohibit Holocaust denial. Despite the new guidelines and numerous user reports flagging the content, the post remained visible for an extended period. Some reports were automatically closed under the company’s “COVID-19-related automation policies,” designed to let Meta’s limited pool of human reviewers prioritize reports considered “high-risk.” Other reporters received automated responses stating that the content did not breach Meta’s policies.


One of the users who reported the content escalated the case to the Oversight Board, which concluded that the matter aligns with its commitment to combating “hate speech against marginalized groups.” The Board is now soliciting input on several related questions, including how effectively automation enforces measures against hate speech and how valuable Meta’s transparency reporting is.

In a statement on Meta’s transparency page, the company acknowledged that the content remained accessible following an initial review. It later conceded that this decision was a mistake and that the content did, in fact, violate its hate speech policy; the content has since been removed from Meta’s platforms. The company has committed to implementing the Oversight Board’s ruling on the case. While the Oversight Board can offer policy recommendations based on its investigations, those recommendations are not binding, and Meta retains the discretion to decide whether to adopt them. Depending on the public responses to the Board’s questions, its recommendations could lead to changes in how Meta uses automation to moderate content on Instagram and Facebook.
