A pair of cases brought before the US Supreme Court this week could upend the rules of the internet, targeting a powerful, decades-old statute.
At stake is a question that has been central to the rise of big tech: should companies be held legally accountable for the content their users post? The companies have evaded liability so far, but some US lawmakers and others want to change that. And new lawsuits are taking the statute before the supreme court for the first time.
Both cases were brought by relatives of terror attack victims who say social media companies are responsible for stoking violence with their algorithms. The first case, Gonzalez v Google, is expected to be heard on February 21 and will ask the US Supreme Court to determine whether YouTube, the Google-owned video website, should be held liable for recommending Islamic State terrorism videos. The second, to be heard on February 22, targets Twitter and Facebook as well as Google with similar accusations.
Together they could represent the biggest challenge yet to Section 230 of the Communications Decency Act, a statute that protects tech companies like YouTube from being held liable for content shared and recommended by their platforms. The stakes are high: a ruling in favor of holding YouTube accountable could expose all platforms, large and small, to potential litigation over user content.
While lawmakers across the aisle have pushed for reforms to the 27-year-old statute, contending companies should be held accountable for hosting harmful content, some civil liberties organizations and tech companies have warned that changes to Section 230 could irreparably weaken freedom of expression protections on the internet.
Here’s what you need to know.
What are the details of the two cases?
Gonzalez v Google focuses on whether Google can be held accountable for the content its algorithms recommend, threatening the longstanding protections online publishers have enjoyed under Section 230.
YouTube’s parent company, Google, is being sued by the family of Nohemi Gonzalez, a 23-year-old US citizen who was studying in Paris in 2015 when she was killed in coordinated Islamic State attacks in and around the French capital. The family is appealing a ruling that held that Section 230 protects YouTube from liability for recommending content that incites or calls for acts of violence. In this case, the content in question was IS recruitment videos.
“The defendants are alleged to have recommended that users view incendiary videos created by ISIS, videos that played a key role in recruiting fighters to join ISIS in its subjugation of a large area of the Middle East and committing terrorist acts in their countries of origin,” court documents read.
In the case of Twitter v Taamneh, relatives of a victim of a 2017 terror attack allegedly carried out by IS argue that social media companies are to blame for the rise of extremism. The case targets Google as well as Twitter and Facebook.
What does Section 230 do?
Passed in 1996, Section 230 protects companies like YouTube, Twitter, and Facebook from being held liable for content posted by their users by treating them as platforms rather than publishers. Civil liberties groups point out that the statute also offers valuable protections for free speech by giving technology platforms the right to host a variety of information without undue censorship.
In this case, the supreme court is being asked to determine whether the immunity granted by Section 230 also extends to platforms when they not only host content but also make “targeted recommendations” of information. The outcome of the case will be closely watched, said Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights.
“What is at stake here are the rules for free speech on the internet,” he said. “This case could help determine whether major social media platforms continue to provide venues for free expression of all kinds, from political debates to people posting their art to human rights activists telling the world what is going wrong in their countries.”
A crackdown on algorithmic recommendations would affect almost every social media platform. Most moved away from simple chronological feeds after Facebook launched its News Feed in 2006, an algorithm-driven landing page that recommends content to users based on their online activity.
Regulating this technology would mean altering the face of the internet itself, Barrett said. “That’s what social networks do: they recommend content.”
What is the response to efforts to reform Section 230?
Holding tech companies to account for their recommendation systems has become a rallying cry for Republican and Democratic lawmakers alike. Republicans say the platforms have suppressed conservative views, while Democrats say the platforms’ algorithms are amplifying hate speech and other harmful content.
The Section 230 debate has created a rare consensus across the political spectrum that change must be made, with even Facebook’s Mark Zuckerberg telling Congress that “it may make sense that there’s content accountability,” and that Facebook “would benefit from clearer guidance from elected officials.” Both Joe Biden and his predecessor Donald Trump have called for changes to the measure.
What can go wrong?
Despite the efforts of lawmakers, many businesses, academics, and human rights advocates have defended Section 230, saying changes to the measure could backfire and significantly alter the internet as we know it.
Companies like Reddit, Twitter, and Microsoft, and tech critics like the Electronic Frontier Foundation, have filed briefs with the court arguing that holding platforms accountable for algorithmic recommendations would have serious effects on free speech and internet content.
Free speech and digital rights activist Evan Greer says holding companies accountable for their recommender systems could “lead to a widespread crackdown on legitimate political, religious and other speech.”
“Section 230 is widely misunderstood by the general public,” said Greer, who is also a director of digital rights group Fight for the Future. “The truth is that Section 230 is a fundamental law for human rights and free speech globally, and more or less the only reason crucial information can still be found online about controversial issues like abortion, sexual health, military actions, police killings, figures accused of sexual misconduct, and more.”