The Supreme Court agreed to hear challenges to the broad liability protections enjoyed by Google, Facebook and other social media companies for third-party content posted on their platforms.
More specifically, the high court will examine whether tech platforms are protected when their algorithms recommend problematic content to users.
The family of Nohemi Gonzalez, who was killed in an ISIS attack in Paris in 2015, sued Google, alleging that it “aided and abetted” ISIS by hosting the group’s terrorist videos on YouTube and including them in users’ recommendations.
Section 230 of the Communications Decency Act generally shields tech platforms from liability for third-party content and for how they moderate it. The provision has become a target of lawmakers in both parties, with Republicans accusing tech giants of censoring conservative viewpoints and Democrats arguing that the companies have not done enough to root out misinformation.
In its brief, Google argued that it is protected by Section 230, with its attorneys writing that a
“search algorithm takes the information the user puts in the search box and displays the information most likely to be of interest. Likewise, YouTube takes user inputs—like previous videos watched—and displays thumbnail videos of potential interest.”
“This Court should not lightly adopt a reading of section 230 that would threaten the basic organizational
decisions of the modern internet,” Google’s attorneys wrote.
The high court also agreed to take a separate case involving another family’s claim that Twitter could be held liable under the Anti-Terrorism Act.