Marjorie Taylor Greene's Twitter ban and Big Tech censorship

January 4, 2022:

It’s not exactly surprising that Rep. Marjorie Taylor Greene (R-GA) — a politician who’s built her career by promoting viral conspiracy theories like QAnon — was suspended on Sunday from Twitter and on Monday from Facebook for posting misinformation about Covid-19 vaccines.

It’s equally unsurprising that Greene and her supporters reacted by accusing Twitter and Facebook of censoring her for her political beliefs, rather than for making repeated false statements denying the harms of Covid-19 and the effectiveness of vaccines.

But Marjorie Taylor Greene’s suspension from social media resurfaces a question ahead of the anniversary of the January 6 Capitol riot and the 2022 midterm election cycle: How will social media companies handle the coming onslaught of contentious speech from elected officials and candidates this year?

“I haven’t seen or heard anything about how [social media companies] are planning on handling it,” said Katie Harbath, a director of public policy at Facebook from 2011 to March of last year who now leads a tech policy consultancy firm. “From what I’ve seen, they wait until something’s at their front door that they have to decide on. I’m just really worried.”

In 2019 and 2020, the world was caught up in a fierce debate about whether or not tech companies should intervene when politicians like former President Donald Trump used social media to broadcast harmful misinformation or encourage violence. That debate peaked around the January 6 riot and Trump’s subsequent ban from the sites. Prior to that, Facebook and Twitter had let Trump and other world leaders get away with breaking their rules because their speech was largely deemed “newsworthy” — but they retreated from that position with the Trump ban. It was a controversial but justifiable move in the view of Facebook and Twitter, given the imminent violent threat to US democracy.

But for the past several months, there hasn’t been much movement on the topic of social media platforms’ approach to moderating politicians’ speech. Facebook kicked the can down the road, deferring until 2023 a decision on whether Trump will be allowed back on its platform. Twitter is still in the process of crafting a new policy on how it should police world leaders, which it says it expects to roll out in the coming months.

Now, Greene’s situation is a reminder that whether or not social media companies are ready for it, the debate about how politicians should be allowed to use social media is reigniting. And it’s happening in a political climate that’s highly polarized and conspiracy-theory-driven.

Greene has long tested the limits of social media’s terms of engagement

Much like her political ally, Trump, Greene has built a career around making bombastic, inflammatory, and false statements on social media.

Before her recent suspension, Greene had already accumulated four “strikes” from Twitter for posting Covid-19 misinformation, and a 12-hour suspension in July. Her fifth strike, which triggered her permanent suspension, was a post including the false statement that “extremely high amounts of Covid vaccine deaths are ignored.” Greene posted a similar message on Facebook, which responded with a 24-hour account suspension on Monday.

While Twitter permanently banned Greene’s personal account, she still has access to the platform through her official congressional account, which has nearly 400,000 followers. She’s now actively soliciting “emergency contributions” to her political campaign to fight “Big Tech censorship.”

Greene, like some other far-right and conservative figures who have been banned from mainstream social media, has turned to the social media app Telegram — which has more lax content moderation and encrypted chat — to reach her followers. “Twitter is an enemy to America and can’t handle the truth,” Greene said in a post on Telegram in response to Twitter’s suspension. “That’s fine, I’ll show America we don’t need them and it’s time to defeat our enemies.”

On Monday, Republican House leader Kevin McCarthy (R-CA) made a public statement that didn’t reference Greene by name but seemed to be referring to her case, urging that Section 230, a landmark internet law, be changed so that tech companies can be held legally liable for their content moderation decisions.

Today, under First Amendment law, companies like Facebook and Twitter are considered private actors that are well within their legal rights to ban whomever they want. That includes those, like Greene, who have repeatedly violated their stated terms of service.

But legality aside, there is widespread concern about how much influence private corporations like Facebook and Twitter should have in politics. Both companies have long resisted weighing in on political matters, with Facebook CEO Mark Zuckerberg saying that the company shouldn’t be an “arbiter of truth” and Twitter founder and former CEO Jack Dorsey making freedom of expression a central tenet of the company’s philosophy. But despite that reluctance to make judgment calls on political speech, both companies end up confronting these issues every day simply because people discuss politics on their platforms. And that opens them up to criticism and accusations of censorship.

“Private companies have so much power. There are only a few platforms — and Twitter and Facebook are two of them — that control so much of the public discourse,” said Gautam Hans, a professor at Vanderbilt Law School who specializes in First Amendment law and technology policy. “I think that makes all of us a little uncomfortable.”

Social media’s rules about political speech are still murky

In some ways, Greene’s case was clear-cut: it involved Covid-19 misinformation, an issue that Facebook and Twitter have moderated more strictly since the pandemic began in early 2020.

But when it comes to other topics like Trump’s “Big Lie” false narrative about the 2020 election being stolen from him, or whether the January 6 Capitol riot was justified, social media’s guidelines for what is and isn’t acceptable are a lot more ambiguous.

Around the time of the 2020 presidential election, for example, Twitter and Facebook increased their efforts to police voter misinformation. The companies regularly labeled or removed information that made false claims about voter fraud or the election being rigged.

But now, one year later, it’s unclear exactly how those standards might change, particularly as many Republican members of Congress and candidates continue to support “The Big Lie.”

In the time period immediately after the Capitol riot, social media platforms also employed urgent measures to try to minimize the glorification of the violence that occurred. Facebook, for example, issued an emergency policy to remove any praise of the storming of the Capitol, or calls to bring weapons to locations anywhere in the US.

Facebook did not respond to a question about whether those measures are still in place on the one-year anniversary of the event, when some 34 percent of Americans believe that violent action against the government is sometimes justified, according to recent polling.

Facebook Vice President of Content Policy Monika Bickert said on a November call that the company is “taking steps to combat election interference and misinformation while also working to help people vote,” but she provided few details about any potential new plans.

“We’re enforcing our policies against voter interference content and we’ll continue to refine our strategy to combat content that delegitimizes voting methods, like voter fraud claims,” Bickert said on that call. “And this is all building on our efforts during the US 2020 elections and we’ll have more to share as we get closer to next year’s elections.”

A Twitter spokesperson sent the following statement to Recode on Tuesday:

Our approach both before and after January 6 has been to take strong enforcement action against accounts and Tweets that incite violence or have the potential to lead to offline harm. Engagement and focus across government, civil society, and the private sector are also critical. We recognize that Twitter has an important role to play, and we’re committed to doing our part.

Facebook and Twitter could go a long way toward making their policies on politicians’ speech clearer. But even then, the problem of where to draw the boundaries of political speech won’t be entirely resolved.

“You can have all the clear rules and guidelines,” said Hans. “But fundamentally, there’s always some human discretion that comes into this, and that’s a little disconcerting.”
