The European Union on Monday announced a formal investigation into X over its handling of illegal content and disinformation. It is perhaps the most substantial regulatory action to date against X since it scaled back its content moderation policies after Elon Musk bought the service, once known as Twitter, last year. According to researchers, the company's new policies have led to an increase in inflammatory content on the platform, prompting brands to reduce their advertising.
In pursuing X, the European Union is for the first time using the authority gained after last year's passage of the Digital Services Act. The law gives regulators sweeping new powers to force social media companies to police their platforms for hate speech, misinformation and other divisive content.
The European Commission, the executive branch of the 27-nation bloc, had signaled its intention to take a closer look at X's business practices. In October, regulators launched a preliminary investigation into the spread of “terrorist and violent content and hate speech” on X after the start of the war between Israel and Hamas.
X did not respond to a request for comment.
The investigation highlights an important difference between the United States and Europe in internet regulation. While online speech is largely unregulated in the United States as a result of free speech protections, European governments, for historical and cultural reasons, have placed more restrictions on hate speech, incitement to violence and other harmful material.
The Digital Services Act was an attempt by the EU to force companies to implement procedures to more consistently comply with rules regarding such online content.
Monday's announcement marks the beginning of an investigation without a specific deadline. The investigation is expected to include interviews with outside groups and requests for more evidence from X. If the company is found to have violated the Digital Services Act, it could be fined up to 6 percent of its global revenue.
EU officials said X may not comply with rules requiring online platforms to respond quickly after becoming aware of hateful and unlawful content, such as anti-Semitism and incitement to terrorism. The law also requires companies to conduct risk assessments on the spread of harmful content on their platforms and implement mitigation measures.
Officials also raised concerns about X's non-English content moderation policies, particularly as continent-wide elections approach in 2024.
Additionally, the investigation will examine X's efforts to address the spread of false information. The company relies on a feature, called Community Notes, that allows users to add context to posts they believe are misleading, an approach that EU officials say may not be enough. Regulators will also examine X's practice of giving greater visibility to posts from users who pay to be authenticated, symbolized by a blue check mark.