Supreme Court to decide if the government can seize control of YouTube and Twitter

February 15, 2024:

In mid-2021, about a year before he began his longstanding feud with the biggest employer in his state, Florida’s Republican Gov. Ron DeSantis signed legislation attempting to seize control of content moderation at major social media platforms such as YouTube, Facebook, and Twitter (since renamed X by Elon Musk). A few months later, Texas Gov. Greg Abbott, also a Republican, signed similar legislation in his state.

Both laws are almost comically unconstitutional — the First Amendment does not permit the government to order media companies to publish content they do not wish to publish — and neither law is currently in effect. A federal appeals court halted the key provisions of Florida’s law in 2022, and the Supreme Court temporarily blocked Texas’s law shortly thereafter (though the justices, somewhat ominously, split 5-4 in the latter case).

Nevertheless, the justices have not yet weighed in on whether these two unconstitutional laws must be permanently blocked, and that question is now before the Court in a pair of cases known as Moody v. NetChoice and NetChoice v. Paxton.

The stakes in both cases are quite high, and the Supreme Court’s decision is likely to reveal where each of the Republican justices falls in the GOP’s internal conflict between old-school free market capitalists and a newer generation that is eager to pick cultural fights with business.

Proponents of the two laws have not hidden that they were enacted entirely because Republican lawmakers in Texas and Florida believed that social media websites must do more to elevate conservative voices. As DeSantis said of his state’s law, it exists to fight supposedly “biased silencing” of “our freedom of speech as conservatives … by the ‘big tech’ oligarchs in Silicon Valley.”

So, if the Supreme Court were to uphold these laws, it would give Republican policymakers sweeping and unprecedented ability to control what many American voters read about our elections and our political debates. More broadly, the NetChoice cases are a test of how this Supreme Court, with its 6-3 Republican supermajority, views free market capitalism in an era when many of the justices’ fellow partisans view corporate America as the enemy in a culture war.

DeSantis, in particular, is one of the GOP’s leading voices for a kind of reactionary anti-capitalism that is eager to use the government’s authority to suppress voices that disagree with conservative orthodoxy — often when those voices are associated with big businesses — while elevating opinions DeSantis finds more congenial.

DeSantis famously signed legislation retaliating against the Walt Disney Company after Disney denounced Florida’s “Don’t Say Gay” law — a law that is itself an unconstitutional attempt to suppress speech. He’s also signed legislation seeking to limit investment strategies DeSantis views as too “woke.” And DeSantis said he endorsed former President Donald Trump’s bid to return to the White House because Nikki Haley, Trump’s final rival for the GOP presidential nomination, embodies a “repackaged form of warmed-over corporatism.”

This anti-capitalist Republicanism, moreover, is hardly limited to DeSantis. Among other things, it’s penetrated deep into the Federalist Society — the powerful legal organization that plays an enormous role in selecting Republican appointees to the federal bench. During the Biden administration, the Federalist Society’s annual conventions have featured an array of paranoid speakers making grandiose claims about corporate America, such as the claim that “massive corporations are pursuing a common and mutually agreed upon agenda to destroy American freedom.”

The social media laws at issue in the two NetChoice cases place the GOP’s internal conflict between free market traditionalists and MAGA-aligned culture warriors in stark relief. Again, these laws seek to use the power of the government to seize control of private media companies’ editorial decisions. That’s not simply an attack on the “marketplace of ideas” protected by the First Amendment; it’s a direct attack on the market itself.

Social media companies moderate content and ban users because they want to make money

Before we dive into the details of the social media laws at issue in the NetChoice cases, it’s helpful to understand why social media companies often delete content they deem to be offensive, dangerous, or simply unwelcome on their sites.

The premise underlying both Texas’s and Florida’s laws, which put strict limits on these companies’ power to remove such speech or ban users who engage in it, is that there is, in Abbott’s words, a “dangerous movement by social media companies to silence conservative viewpoints and ideas,” and that the government must step in to quell this supposed movement.

In reality, there is little evidence that companies like Facebook or Google (which owns YouTube) are engaged in any kind of systematic effort to suppress conservative content — right-leaning posts tend to perform quite well on social media. But it is true that some viewpoints associated with the political right, such as support for the January 6 insurrection, tend to be frowned upon by many social media moderators.

The best explanation for such content moderation, however, is not that Mark Zuckerberg is secretly determined to elect Democrats by quieting conservative voices. It’s that social media companies depend on advertisers to make money, and those advertisers demand “brand safety” — meaning that they don’t want to advertise on a site that will place their product next to a swastika, a rant against Covid-19 vaccines, or some other content that is likely to offend many potential customers.

As the Verge’s Nilay Patel colorfully explained, running a profitable social media company “means you have to ban racism, sexism, transphobia, and all kinds of other speech that is totally legal in the United States but reveals people to be total assholes.” These sorts of assholes also apparently have many friends in the Florida and Texas state legislatures.

Patel’s thesis was recently tested in a very unusual real-world experiment. After billionaire Elon Musk purchased Twitter, he declared that the company would move in a more “free speech absolutist” direction and restored the accounts of thousands of users who had been suspended or banned by Twitter’s previous management. That included the accounts of several prominent neo-Nazis and QAnon conspiracy theorists, as well as Trump’s infamous Twitter account.

This move away from moderating far-right content proved disastrous for Twitter. According to an estimate released last fall by the data and analytics company Similarweb, “in September, global web traffic to twitter.com was down -14%, year-over-year, and traffic to the ads.twitter.com portal for advertisers was down -16.5%.” Other reporting shows advertisers fleeing the site.

In fairness, Musk’s management of Twitter has been so comprehensively awful that it is hard to attribute the site’s falling fortunes solely to the reactivation of many previously banned right-wing users. Among other things, Musk’s Twitter tweaked the site’s algorithms in ways that elevated low-quality content produced by people who signed up for Twitter’s new $8-a-month subscription service. And he’s retaliated against users who’ve mocked him online.

Nevertheless, all of these examples of Musk’s poor management support the thesis that a social media company’s profitability rises and falls based on how well the company moderates its content to attract both high-quality users and advertisers. And that means that companies that hope to remain profitable will ban some users who share some opinions that are common within the Republican Party — for reasons that have nothing to do with politics and everything to do with capitalism.

Texas’s and Florida’s laws are ham-handed, incompetently drafted, and almost laughably unconstitutional

Both Florida and Texas frame their laws as anti-discrimination regimes intended to prevent social media companies from treating certain opinions differently than others. The core provision of Texas’s law prohibits the major social media companies from moderating content based on “the viewpoint of the user or another person” or on “the viewpoint represented in the user’s expression or another person’s expression.” Florida’s law, meanwhile, has a similar provision requiring the biggest social media sites to moderate content “in a consistent manner among its users on the platform.”

If taken seriously, however, a ban on viewpoint discrimination wouldn’t just make moderation of offensive political content impossible. It would effectively forbid major social media platforms from taking the most basic steps to sanction rude behavior that is likely to drive away users.

Suppose, for example, that a woman’s stalker ex-boyfriend harasses her on Twitter, creating multiple accounts that bombard her with tweets calling her “ugly” and “stupid.” Under Texas’s law (and most likely under Florida’s more vaguely worded law), Twitter may not ban this stalker, or otherwise take action against his online harassment, unless it also takes identical action against someone who labels the same woman “beautiful” or “intelligent.” Only banning users who express negative opinions about this woman would amount to viewpoint discrimination.

Now consider how these provisions will operate in the political context. Facebook cannot ban someone who calls for a MAGA revolution that overthrows the United States government and installs the Trump family as an absolute hereditary monarchy, unless it also bans people who support the US Constitution. Twitter cannot delete tweets claiming someone can cure Covid-19 by injecting themselves with bleach, unless it also deletes tweets by doctors and public health officials warning people not to do this. YouTube cannot ban a literal Nazi who posts videos calling for the extermination of all Jewish people, unless it also bans people who express the opposite viewpoint — that is, the view that Jews should not be exterminated.

In case there is any doubt, the First Amendment does not allow the government to force media outlets to publish Nazis, quack medical theories, monarchist revolutionaries, stalkers, or anyone else, for that matter. To understand why, it’s helpful to understand four principles of First Amendment law.

First, this amendment protects against both government censorship and government actions that attempt to force someone to speak against their will. As the Supreme Court said in Rumsfeld v. Forum for Academic and Institutional Rights (2006), “freedom of speech prohibits the government from telling people what they must say.”

Second, the First Amendment protects corporations. This idea became controversial after the Court’s decision in Citizens United v. FEC (2010) held that the First Amendment permits corporations to spend unlimited sums of money to influence elections, but it’s impossible to imagine free speech or a free press enduring unless the First Amendment extends to corporate speech. After all, media companies like Vox Media, the New York Times, and the Washington Post are all corporations.

Third, the First Amendment protects the right of traditional media companies such as newspapers to choose what they want to print. As the Court held in Miami Herald v. Tornillo (1974), a news outlet’s “choice of material to go into a newspaper” is subject only to the paper’s “editorial control and judgment,” and “it has yet to be demonstrated how governmental regulation of this crucial process can be exercised consistent with First Amendment guarantees of a free press.”

Finally, the same rules apply to internet-based media as apply to traditional outlets. The Supreme Court acknowledged in Reno v. ACLU (1997) that online media is distinct from other mediums because it “can hardly be considered a ‘scarce’ expressive commodity” — that is, unlike a newspaper or magazine, there is no physical limit on how much content can be published on a website. Nevertheless, Reno concluded that “our cases provide no basis for qualifying the level of First Amendment scrutiny that should be applied to this medium.”

Together, these four principles establish that YouTube, and not the government, gets to decide what videos will appear on YouTube — just as CNN gets to decide which guests appear on its network, and which news stories it will emphasize, without government coercion.

This conclusion, moreover, is bolstered by the Supreme Court’s decision last June in 303 Creative v. Elenis, which held that an anti-LGBTQ website designer could refuse to do business with same-sex couples — even if her state’s law forbids such discrimination — because “the government may not compel a person to speak its own preferred messages.”

If the Supreme Court were to hold that religious conservatives have a First Amendment right to defy anti-discrimination laws, at least in the context of online speech, but that Republican states can forbid major media outlets from “discriminating” against insurrectionists and anti-vaxxers — well, it’s hard to see how anyone could take this Court seriously as a nonpartisan institution after such a decision.

Someone has to have the final word on what content appears online, and this authority must not be given to the government

Having laid out these constitutional principles, it’s important to acknowledge that social media companies do not always make responsible decisions about what content should appear online. Just ask the Rohingya people.

But the fact that social media platforms sometimes make bad decisions does not mean that we should trust the government to override those decisions.

Ordinarily, we trust government officials to regulate business because they are more likely to act in the public’s interest than executives at a for-profit company. EPA regulators do not always reach the right conclusions, but they are more likely to strike the right balance between economic growth and environmental protection than the CEO of Exxon.

But this dynamic is reversed in the free speech context — which is why the First Amendment exists in the first place. If Texas Republicans are allowed to regulate political speech, they will likely elevate speech that benefits Republicans and suppress speech that benefits Democrats.

Ultimately, someone needs to decide what content will appear online. And leaving these decisions to the free market means that they won’t be made by the most self-interested people in the world: elected officials who are more likely to hold onto their jobs if they can manipulate what information is seen by voters.

Nor is it a solution to give this power to unelected officials. Federal judges, and especially Supreme Court justices, are political appointees who are typically vetted by the White House to ensure that they support the incumbent president’s political goals. Government agencies are also normally run by political appointees chosen, at least in part, because they are loyal Democrats or Republicans.

So there’s no government agency that can be trusted to regulate speech in a politically neutral way. The only choice is to either let the social media companies run their own platforms or to give that power to the government. And, in the NetChoice cases, giving that power to the government means placing control over what information voters will see in the hands of men like Ron DeSantis and Greg Abbott.

It is right to be uncomfortable with Mark Zuckerberg or, for God’s sake, Elon Musk wielding the kind of power they wield over public discourse. But few things are more dangerous to democracy than a government that can override editorial decisions made by a free press.

More broadly, the NetChoice cases will show us which members of the Supreme Court’s six-justice Republican majority still believe in traditional Republican ideas about the free market and capitalism, and which of them agree with DeSantis that the power of the government should be used to reshape our culture — and that corporations that do not align with the rightward side of a culture war should be forced to do so against their will.
