February 16, 2023:
In 2015, individuals affiliated with the terrorist group ISIS conducted a wave of violence and mass murder in Paris — killing 129 people. One of them was Nohemi Gonzalez, a 23-year-old American student who died after ISIS assailants opened fire on the café where she and her friends were eating dinner.
A little more than a year later, on New Year’s Day 2017, a gunman opened fire inside a nightclub in Istanbul, killing 39 people — including a Jordanian national named Nawras Alassaf who had several American relatives. ISIS also claimed responsibility for this act of mass murder.
In response to these horrific acts, Gonzalez’s and Alassaf’s families brought federal lawsuits pinning the blame for these attacks on some very unlikely defendants. In Gonzalez v. Google, Gonzalez’s survivors claim that the tech giant Google should compensate them for the loss of their loved one. In a separate suit, Twitter v. Taamneh, Alassaf’s relatives make similar claims against Google, Twitter, and Facebook.
The thrust of both lawsuits is that websites like Twitter, Facebook, or Google-owned YouTube are legally responsible for the two ISIS killings because ISIS was able to post recruitment videos and other content to these websites, and that content was not immediately taken down. The plaintiffs in both suits rely on a federal law that allows “any national of the United States” who is injured by an act of international terrorism to sue anyone who “aids and abets, by knowingly providing substantial assistance” to anyone who commits “such an act of international terrorism.”
The stakes in Gonzalez and Twitter are enormous. And the possibility of serious disruption is fairly high. There are a number of entirely plausible legal arguments, which have been embraced by some of the leading minds on the lower federal courts, that endanger much of the modern-day internet’s ability to function.
It’s not immediately clear that these tech companies are capable of sniffing out everyone associated with ISIS who uses their websites — although they claim to try to track down at least some ISIS members. Twitter, for example, says that it has “terminated over 1.7 million accounts” for violating its policies forbidding content promoting terrorism or other illegal activities.
But if the Court decides they should be legally responsible for removing every last bit of content posted by terrorists, that opens them up to massive liability. Federal antiterrorism law provides that a plaintiff who successfully shows that a company knowingly provided “substantial assistance” to a terrorist act “shall recover threefold the damages he or she sustains and the cost of the suit.” So if these lawsuits prevail, even an enormous company like Google could face liability severe enough to endanger the entire company.
A second possibility is that these companies, faced with such extraordinary liability, would instead choose to censor millions of peaceful social media users in order to make sure that no terrorism-related content slips through. As a group of civil liberties organizations led by the Center for Democracy and Technology warn in an amicus brief, an overbroad reading of federal antiterrorism law “would effectively require platforms to sharply limit the content they allow users to post, lest courts find they failed to take sufficiently ‘meaningful steps’ against speech later deemed beneficial to an organization labeled ‘terrorist.’”
And then there’s a third possibility: What if a company like Google, which may be the most sophisticated data-gathering institution that has ever existed, is actually capable of building an algorithm that can sniff out users who are involved in illegal activity? Such technology might allow tech companies to find ISIS members and kick them off their platforms. But, once such technology exists, it’s not hard to imagine how authoritarian world leaders would try to commandeer it.
Imagine a world, for example, where India’s Hindu nationalist prime minister Narendra Modi can require Google to turn such a surveillance apparatus against peaceful Muslim political activists as a condition of doing business in India.
And there’s also one other reason to gaze upon the Gonzalez and Twitter cases with alarm. Both cases implicate Section 230 of the Communications Decency Act of 1996, arguably the most important statute in the internet’s entire history.
Section 230 prohibits lawsuits against websites that host content produced by third parties — so, for example, if I post a defamatory tweet that falsely accuses singer Harry Styles of leading a secretive, Illuminati-like cartel that seeks to overthrow the government of Ecuador, Styles can sue me for defamation but he cannot sue Twitter. Without these legal protections, it is unlikely that interactive websites like Facebook, YouTube, or Twitter could exist. (To be clear, I am emphatically not accusing Styles of leading such a cartel. Please don’t sue me, Harry.)
But Section 230 is also a very old law, written at a time when the internet looked very different than it does today. It plausibly can be read to allow a site like YouTube or Twitter to be sued if its algorithm surfaces content that is defamatory or worse.
There are very serious arguments that these algorithms play a considerable role in radicalizing people on the fringes of society: at least in some cases, they surface more and more extreme versions of the content users like to watch, eventually leading them to some very dark places. In an ideal world, Congress would wrestle with the nuanced and complicated questions presented by these cases — such as whether we should tolerate more extremism as the price of universal access to innovation.
But the likelihood that the current Congress will be able to confront these questions in any serious way is, to put it mildly, not high. And that means that the Supreme Court will almost certainly move first, potentially stripping away the legal protections that companies like Google, Facebook, or Twitter need to remain viable businesses — or, worse, forcing these companies to engage in mass censorship or surveillance.
Indeed, one reason why the Gonzalez and Twitter cases are so disturbing is that they turn on older statutes and venerable legal doctrines that were not created with the modern-day internet in mind. There are very plausible, if by no means airtight, arguments that these outdated US laws really do impose massive liability on companies like Google for the actions of a mass murderer in Istanbul.
The question the Supreme Court is supposed to resolve in the Gonzalez case is whether Section 230 immunizes tech companies like Google or Facebook from liability if ISIS posts recruitment videos or other terrorism-promoting content to their websites — and then that content is presented to website users by the website’s algorithm. Before we can analyze this case, however, it is helpful to understand why Section 230 exists, and what it does.
Before the internet, companies that allow people to communicate with each other typically were not legally responsible for the things those people say to one another. If I call up my brother on the telephone and make a false and defamatory claim about Harry Styles, for example, Styles may be able to sue me for slander. But he couldn’t sue the phone company.
The rule is different for newspapers, magazines, or other institutions that carefully curate which content they publish. If I publish the same defamatory claim on Vox, Styles may sue Vox Media for libel.
Much of the internet, however, exists in a gray zone between telephone companies, which do not screen the content of people’s calls, and curated media like a magazine or newspaper. Websites like YouTube or Facebook typically have terms of service that prohibit certain kinds of content, such as content promoting terrorism. And they sometimes ban or suspend certain users, including former President Donald Trump, who violate these policies. But they also don’t exercise anywhere near the level of control that a newspaper or magazine exercises over its content.
This uncertainty about how to classify interactive websites came to a head after a New York state court ruled in 1995 that Prodigy, an early online discussion website, was legally responsible for anything anyone posted on its “bulletin boards” because it conducted some content moderation.
Which brings us to Section 230. Congress enacted this law to provide a liability shield to websites that publish content by the general public, and that also employ moderators or algorithms to remove offensive or otherwise undesirable content.
Broadly speaking, Section 230 does two things. First, it provides that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This means that if a website like YouTube or Facebook hosts content produced by third parties, it won’t be held legally responsible for that content in the same way that a newspaper is responsible for any article published in its pages.
Second, Section 230 allows online forums to keep their lawsuit immunity even if they “restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” This allows these websites to delete content that is offensive (such as racial slurs or pornography), that is dangerous (such as content promoting terrorism), or that is even just annoying (such as a bulletin board user who continuously posts the word “BABABOOEY” to disrupt an ongoing conversation) without opening the website up to liability.
Without these two protections, it is very unlikely that the modern-day internet would exist. It simply is not possible for a social media site with hundreds of millions of users to screen every single piece of content posted to the site to make sure that it is not defamatory — or otherwise illegal. As the investigative journalism site ProPublica once put it, with only a mild amount of hyperbole, the provision of Section 230 protecting interactive websites from liability is the “twenty-six words [that] created the internet.”
The gist of the plaintiffs’ arguments in Gonzalez is that a website like YouTube or Facebook is not protected by Section 230 if it “affirmatively recommends other party materials,” regardless of whether those recommendations are made by a human or by a computer algorithm.
Thus, under this theory, while Section 230 prohibits Google from being sued simply because YouTube hosts an ISIS recruitment video, its Section 230 protections evaporate the minute that YouTube’s algorithm recommends such a video to users.
The potential implications of this legal theory are fairly breathtaking, as websites like Twitter, YouTube, and Facebook all rely on algorithms to help their users sort through the torrent of information on those websites. Google’s search engine, moreover, is basically just one big recommendation algorithm that decides which links are relevant to a user’s query, and which order to list those links in.
Thus, if Google loses its Section 230 protections because it uses algorithms to recommend content to users, one of the most important backbones of the internet could face ruinous liability. If a news outlet that is completely unaffiliated with Google publishes a defamatory article, and Google’s search algorithm surfaces that article to one of Google’s users, Google could potentially be liable for defamation.
And yet, the question of whether Section 230 applies to websites that use algorithms to sort through content is genuinely unclear, and has divided lower court judges who typically approach the law in similar ways.
In the Gonzalez case itself, a divided panel of the United States Court of Appeals for the Ninth Circuit concluded that algorithms like the one YouTube uses to display content are protected by Section 230. Among other things, the majority opinion by Judge Morgan Christen, an Obama appointee, argued that websites necessarily must make decisions that elevate some content while rendering other content less visible. Quoting from a similar Second Circuit case, Christen explained that “websites ‘have always decided … where on their sites … particular third-party content should reside and to whom it should be shown.’”
Meanwhile, the leading criticism of Judge Christen’s reading of Section 230 was offered by the late Judge Robert Katzmann, a highly regarded Clinton appointee to the Second Circuit. Dissenting in Force v. Facebook (2019), Katzmann pointed to the fact that Section 230 only prohibits courts from treating an online forum “as the publisher” of illegal content posted by one of its users.
Facebook’s algorithms do “more than just publishing content,” Katzmann argued. Their function is “proactively creating networks of people” by suggesting individuals and groups that the user should attend to or follow. That goes beyond publishing, and therefore, according to Katzmann, falls outside of Section 230’s protections.
The likely reason for this confusion about what Section 230 means is that the law was enacted nearly three decades ago, when the internet as a mass consumer phenomenon was still in its infancy. Congress did not anticipate the role that algorithms would play in the modern-day internet, so it did not write a statute that clearly answers the question of whether algorithms that recommend content to website users shatter Section 230 immunity. Both Christen and Katzmann offer plausible readings of the statute.
In an ideal world, Congress would step in to write a new law that strikes a balance between ensuring that essential websites like Google can function and providing some additional safeguards against the promotion of illegal content. But the House of Representatives just spent an entire week trying to figure out how to elect a speaker, so the likelihood that the current, highly dysfunctional Congress will perform such a nuanced and highly technical task is vanishingly small.
And that means that the question of whether much of the internet will continue to function will turn on how nine lawyers in black robes decide to read Section 230.
Let’s assume for a moment that the Supreme Court accepts the Gonzalez plaintiffs’ interpretation of Section 230, and thus Google, Twitter, and Facebook lose their immunity from lawsuits claiming that they are liable for the ISIS attacks in Paris and Istanbul. To prevail, the plaintiffs in both Gonzalez and Twitter would still need to prove that these websites violated federal antiterrorism law, which makes it illegal to “knowingly” provide “substantial assistance” to “an act of international terrorism.”
The Supreme Court will consider what this statute means when it hears the Twitter case. But this statute is, to say the least, exceedingly vague. Just how much “assistance” must someone provide to a terrorist plot before that assistance becomes “substantial”? Is it enough for the Twitter plaintiffs to show that a tech company provided generalized assistance to ISIS, such as by operating a website where ISIS was able to post content? Or do those plaintiffs have to show that, by enabling ISIS to post this content online, these tech companies specifically provided assistance to the Istanbul attack itself?
The Twitter plaintiffs’ theory of what constitutes “substantial assistance” is quite broad. They do not allege that Google, Facebook, or Twitter specifically set out to assist the Istanbul attack itself. Rather, they argue that these websites’ algorithms “recommended and disseminated a large volume of written and video terrorist material created by ISIS,” and that providing such a forum for ISIS content was key to “ISIS’s efforts to recruit terrorists, raise money, and terrorize the public.”
Perhaps that is true, but it’s worth noting that Twitter, Facebook, and Google are not accused of providing any special assistance to ISIS. Indeed, all three companies say that they have policies prohibiting content that seeks to promote terrorism, although ISIS was sometimes able to thwart these policies. Rather, as the Biden administration says in an amicus brief urging the justices to rule in favor of the social media companies, the Twitter plaintiffs “allege that defendants knew that ISIS and its affiliates used defendants’ widely available social media platforms, in common with millions, if not billions, of other people around the world, and that defendants failed to actively monitor for and stop such use.”
If a company can be held liable for a terrorist organization’s actions simply because it allowed that organization’s members to use its products on the same terms as any other consumer, then the implications could be astonishing.
Suppose, for example, that Verizon, the cell phone company, knows that a terrorist organization sometimes uses Verizon’s cellular network because the government occasionally approaches Verizon with wiretap requests. Under the Twitter plaintiffs’ reading of the antiterrorism statute, Verizon could potentially be held liable for terrorist attacks committed by this organization unless it takes affirmative steps to prevent that organization from using Verizon’s phones.
Faced with the threat of such awesome liability, these companies would likely implement policies that would harm millions of non-terrorist consumers. As the civil liberties groups warn in their amicus brief, media companies are likely to “take extreme and speech-chilling steps to insulate themselves from potential liability,” cutting off communications by all kinds of peaceful and law-abiding individuals.
Or, worse, tech companies might try to implement a kind of panopticon, whereby every phone conversation, every email, every social media post, and every direct message is monitored by an algorithm intended to sniff out terrorist sympathizers — and then deny service to anyone who is flagged by this algorithm. And once such a surveillance network is built, authoritarian rulers across the globe are likely to pressure these tech companies to use that network to target political dissidents and other peaceful actors.
Despite all of these concerns, the likely reason why the Twitter case had enough legs to make it to the Supreme Court is that the relevant antiterrorism law is quite vague, and court decisions do little to clarify the law. That said, one particularly important federal court decision provides the justices with an off-ramp they can use to dispose of this case without making Google responsible for every evil act committed by ISIS.
Federal law states that, in determining whether an organization provided substantial assistance to an act of international terrorism, courts should look at “the decision of the United States Court of Appeals for the District of Columbia in Halberstam v. Welch,” a 1983 decision that, in Congress’s opinion, “provides the proper legal framework for how such liability should function.”
The facts of Halberstam could hardly be more different from the allegations against Google, Twitter, and Facebook. The case concerned an unmarried couple, Linda Hamilton and Bernard Welch, who lived together and who grew fantastically rich from Welch’s five-year campaign of burglaries. Welch would frequently break into people’s homes, steal items made of precious metals, melt them into bars using a smelting furnace installed in the couple’s garage, and then sell the precious metals. Hamilton, meanwhile, did much of the paperwork and bookkeeping for this operation, but did not actually participate in the break-ins.
The court in Halberstam concluded that Hamilton provided “substantial assistance” to Welch’s criminal activities, and thus could be held liable to his victims. In so holding, the DC Circuit also surveyed several other cases where courts concluded that an individual could be held liable because they provided substantial assistance to the illegal actions of another person.
In some of these cases, a third party egged on an individual who was engaged in illegal activity — such as one case where a bystander yelled at an assailant who was beating another person to “kill him” and “hit him more.” In another case, a student was injured by a group of students who were throwing erasers at each other in a classroom. The court held that a student who threw no erasers, but who “had only aided the throwers by retrieving and handing erasers to them,” was legally responsible for this injury too.
In yet another case, four boys broke into a church to steal soft drinks. During the break-in, two of the boys carried torches that started a fire that damaged the church. The court held that a third boy, who participated in the break-in but did not carry a torch, could still be held liable for the fire.
One factor that unifies all of these cases is that the person who provided “substantial assistance” to an illegal activity had some special relationship with the perpetrator of that activity that went beyond providing a service to the public at large. Hamilton provided clerical services to Welch that she did not provide to the general public. A bystander egged on a single assailant. A student handed erasers to specific classmates. Four boys decided to work together to burglarize a church.
The Supreme Court, in other words, could seize upon this unifying thread among these cases to rule that, in order to provide “substantial assistance” to a terrorist act, a company must have some special relationship with the terrorist group that goes beyond providing it a product on the same terms that the product is available to any other consumer. This is more or less the same approach that the Biden administration urges the Court to adopt in its amicus brief.
Again, the most likely reason why this case is before the Supreme Court is that previous court decisions do not adequately define what it means to provide “substantial assistance” to a terrorist act, so neither party can point to a slam-dunk precedent that definitively tells the justices to rule in its favor. But Halberstam and related cases can very plausibly be read to require companies to do more than provide a product to the general public before they can be held responsible for the murderous actions of a terrorist group.
Given the potentially disastrous consequences for internet commerce if the Court rules otherwise, that’s as good a reason as any to read this antiterrorism statute narrowly. That would, at least, neutralize one threat to the modern-day internet — although the Court could still create considerable chaos by reading Section 230 narrowly in the Gonzalez case.
In closing this long and complicated analysis of two devilishly difficult Supreme Court cases, I want to acknowledge the very real evidence that the algorithms social media websites use to surface content to their users can cause significant harm. As sociologist and Columbia professor Zeynep Tufekci wrote in 2018, YouTube “may be one of the most powerful radicalizing instruments of the 21st century” because of its algorithms’ propensity to serve up more and more extreme versions of the content its users decide to watch. A casual runner who starts off watching videos about jogging may be directed to videos about ultramarathons. Meanwhile, someone watching Trump rallies may be pointed to “white supremacist rants.”
If the United States had a more functional Congress, there might very well be legitimate reasons for lawmakers to consider amending Section 230 or the antiterrorism law at the heart of the Twitter case to quell this kind of radicalization — though obviously such a law would need to comply with the First Amendment.
But the likelihood that nine lawyers in black robes, none of whom have any particular expertise in tech policy, will find the solution to this vexing problem in vague statutes that were not written with the modern-day internet in mind is small, to say the least. It is much more likely that, if they rule against the social media defendants in these cases, the justices will suppress internet commerce across the globe, diminish much of the internet’s ability to function, and perhaps do something even worse — effectively force companies like Google to become engines of censorship or mass surveillance.
Indeed, if the Court interprets Section 230 too narrowly, or if it reads the antiterrorism statute too broadly, that could effectively impose the death penalty on many websites that make up the backbone of the internet. That would be a monumental decision, and it should come from a body with more democratic legitimacy than the nine unelected people who make up the Supreme Court.