Twitter Won't Distinguish Between Hate Speech And Those Who Oppose It

Tolerance and intolerance are analogous to matter and anti-matter: They cannot exist in the same place, but Twitter has the solution all wrong.

Having reached Day 8 (and counting) of my suspension from Twitter for posting the image of Klan-hooded stars from the cover of my book Alt-America, I can at least report one development: Yes, my life has gotten materially better without it.

It’s true that I very much miss the stream of news that my feed provided, one I had tried to tailor to my peculiar needs as a journalist covering the radical Right. I certainly miss the instant communication I have had with my colleagues. None of that is easy to replace.

But the low-level stress of being on Twitter—dealing with adversarial conversations, contending with the ugliness that erupts whenever trolls come rolling around—is one of those things, like regular driving during rush hour, that you most appreciate when you stop doing it.

That said, for many obvious reasons I am working to end the suspension, but I am adamant about not removing an image that isn’t hateful but is pure political commentary (of the admittedly sharp-elbowed kind). And I’m fighting it because, taken to its logical conclusion, Twitter’s reasoning for the suspension would leave not just me but any reporter who works to monitor and expose the activities of far-right extremists under constant threat of being banned simply for doing our mainstream jobs as journalists.

I did hear back from Twitter. This is the statement I was given by a company spokesperson:

We don't allow hateful symbols in avatars or header images in order to protect users from unintentionally being exposed to abusive imagery that can be upsetting. We know that some people may use these symbols to confront a hateful ideology, but that's not always obvious at first glance. In this case, additional context in the account profile clarifies the intent, but that isn't always visible in the product. So, for example, someone viewing the header on mobile wouldn't have the extra context, and would only see the hateful image.

The problems with this rationale are twofold: First, it accepts at face value that the image in my profile is “hateful,” and treats context as meaningless, so that Twitter itself makes no distinction between hate speech and speech intended to counter the hateful kind; second, it leaves the door wide open for people who propagate hate on their Twitter pages to operate freely while they appropriate supposedly benign images—think of all those Pepe and “Groyper” alt-right avatars out there—for the purpose of spreading white-nationalist ideology.

In other words, Twitter’s algorithm has the net effect of privileging alt-right extremism—which specializes in this kind of appropriation and “ironic” use of disguised hate symbols, ranging from the Kek banner to the “OK” sign. And it punishes the serious work of combating white supremacism, which supposedly is what Twitter had in mind when it announced its intention to investigate whether it should actively de-platform hate groups and far-right extremists.

Twitter’s algorithm, for example, could not detect that the account that went by the nom de plume “Joseph Paul Franklin,” and used Franklin’s portrait as its avatar, was in fact paying homage to one of the worst domestic terrorists in our history: a man who traveled the country for the better part of four years (1977-80) with a sniper rifle, undertaking a running murder rampage that targeted mixed-race couples and civil-rights leaders, including Vernon Jordan, whom he shot but fortunately did not kill. (Franklin was executed by lethal injection in 2013.)

Claiming that identity, of course, manifests an extraordinary level of hatefulness that also suggests an intent to carry out a lethal threat. But Twitter’s algorithm isn’t designed to recognize any of that. It’s stupid, like all technological attempts to replace sound human judgment. (They did give this ID the boot after I reported it.)

As you can imagine, Twitter itself was full of right-wing respondents who were utterly delighted at the news of my suspension. Contrarian con artists like Tim Pool and Andy Ngo, who loudly declaim in other cases about “freeze peach” or something like that, were quite happy with it. Most of them saw it as just deserts, mainly because I have made no bones about my view that de-platforming (a la Milo Yiannopoulos) works, and about how, in an open democratic society, it’s possible to rein in the kind of speech that alt-right haters specialize in spreading, even though in a nation like ours with a First Amendment, it’s tricky.

This was the main focus of Seattle’s homegrown contrarian grifter, Katie Herzog, who (typically) sounded a few sympathetic notes on my behalf before diving into that large vat of Intellectual Dark Web-strength Schadenfreude:

The irony here is that Neiwert himself is a proponent of social media bans. He told the Daily Beast that he thinks his account was targeted by "Twitter trolls" in retaliation for reporting their accounts, and while there is obviously a difference between posting hateful content and commenting on hateful content, this was absolutely bound to happen when tech companies moderate users' speech. There are an estimated 500 million tweets sent every day. It would be pretty damn costly for human beings to moderate all that content, if it's even possible, and so companies like Twitter use artificial intelligence and algorithms to do it instead. Of course they get it wrong! Nuance isn't exactly a robot's strong suit. Wrongful suspensions and bans are exactly what's going to happen when we demand censorship via corporations.

It's true that, when it comes to hate speech, I remain a vocal advocate not just of private platforms’ ability to set standards for themselves but of their absolute need to do so both rigorously and effectively. My ban speaks to the former, but it also underscores the latter quite vividly.

I am something of a curmudgeonly old newspaper editor who has long understood that free speech doesn’t guarantee anyone a platform, particularly not when the owners of those platforms can be held legally and financially responsible for damages inflicted by the speech of the people using them. Twitter, Facebook, YouTube, and all the other media platforms have every right to set standards for the speech they publish, but they also have an obligation to enforce those standards both consistently and effectively.

Twitter, as we all know, is wildly inconsistent. It continues to platform David Duke, unquestionably the leading figure of white-nationalist and anti-Semitic hate-mongering in the United States over the past 50 years. It shouldn’t need a specific excuse to deny him access to its platform; his long and ugly history alone should suffice. (Maybe a viewing of BlacKkKlansman would help jog Twitter executives’ memories.)

Duke is only the most prominent and egregious example. There are literally dozens of leading white nationalists who use Twitter blithely and without interruption, including Richard Spencer and Mike “Enoch” Peinovich, not to mention “alt-lite” figures like Mike Cernovich and Jack Posobiec. They remain on the platform despite multiple instances of clearly violating both the letter of Twitter’s terms of service and the spirit of its oft-stated intention to keep hate speech and threatening behavior off its website.

At least Twitter, YouTube, and Facebook have effectively removed Alex Jones and his Infowars operation, the primary fount of right-wing conspiracist disinformation on the planet. And yes, if Herzog could access my Twitter account, she’d be able to find at least one instance of me rooting for his complete removal from these platforms. But the final removal for which I am hoping is a more thoroughgoing one: the traditional route of being utterly bankrupted by people who have been materially harmed by his speech, namely the parents of the Sandy Hook children who have been tortured by Jones’s flying monkeys and their insistence that the massacre was a “false flag” and those children never really existed.

The Jones/Sandy Hook case, you see, is the large hurtling train that all these companies have seen steaming toward them: their own culpability and financial liability in such cases. Facebook has faced significant questions about its responsibilities since the Christchurch, New Zealand, massacre in March, largely because of the perpetrator’s use of its livestreaming function. It’s only a matter of time before one of these yahoos commits an act that will bring massive lawsuits down on these companies’ heads.

Herzog is wrong, of course, that humans cannot identify hate speech when algorithms can’t. They actually can: just ask anyone who works for the SPLC or ADL. And it’s easy to toss the 500-million-tweets-a-day figure out there, but it obscures the more realistic situation: Algorithms can obviously be used to flag potential violations at that scale, but how many suspensions for hate speech actually happen daily? That number is probably in the hundreds, a workload that could be handled by hiring people trained to the task.

Twitter, essentially, is relying on algorithms to perform tasks that require human judgment, something algorithms are incapable of doing. A number of people assumed, for instance, that once Twitter had actually taken a look at my “violation,” it would realize it had made a mistake and restore my account. That hasn’t happened, and doesn’t look like it’s going to.

The larger point, however, is that none of this has to do with free speech or our rights, at least not in the way contrarian grifters like Pool and Herzog seem to think it does. The First Amendment guarantees only that the government cannot censor people’s speech, and the government simply is not in play here.

If we want to discuss free-speech rights more broadly, then let’s—because the efforts to remove fascist speech from these platforms in fact are aimed at protecting the open society that makes the free exchange of ideas possible in the first place.

As the liability issue suggests, there’s never been any kind of legal obligation for privately owned media and publishing platforms to provide access to anyone who wants it. Newspapers have always had editors who select the letters and op-eds that appear on their pages. And there’s never been any expectation that TV stations should have to broadcast the mental masturbation of a mass of conspiracy theorists. Those platforms, however, have always had human beings hired for their knowledge and sound judgment to make those decisions, and that’s all the so-called “censors” are calling for now.

The rightists, centrists, and contrarian “free speech absolutists” treat speech as though it were merely inert, essentially interchangeable content that has no effect on other speech. But fascist speech, particularly targeted hate speech, is in fact aimed squarely at destroying the free-speech rights of its targets, both short-term and long-term.

It’s the age-old conundrum that advocates of an open society have always faced: If you are all about the broad tolerance of ideas, why can’t you tolerate people who promote intolerance?

There’s really a simple answer, too. Tolerance and intolerance are analogous to matter and anti-matter: They cannot exist in the same place. Just as anti-matter annihilates matter on contact, intolerance will bring any society built on principles of tolerance crashing to the ground.

This is a lesson that emerged from the ashes of World War II, when the world came to realize that fascists had pretended to care about free speech when it came to their own ability to speak loudly and frighteningly, but once they assumed power all free speech for everyone else vanished utterly. Unquestionably the leading voice in this regard was the great philosopher of science, Karl Popper, whose book The Open Society and Its Enemies explored this dynamic in great detail and with brilliant insight.

Afterward, most of the nations of the world, particularly those in Europe that had been victims of the fascists, adopted laws prohibiting fascist speech and organizing. However, in the United States, the First Amendment permits no such laws; our traditional bulwark against fascist speech has instead consisted of stalwart American citizens standing up and using their own free-speech rights to oppose it. And for most of the half-century following the war, the living memory of the Holocaust and D-Day, and the searing lessons about the nature of fascism that came with it, mostly maintained that bulwark.

Now, those memories are fading, and the hip young contrarians are gullibly (or cynically) opening the door to fascist speech again. Not only do they do literally nothing to oppose fascists, white nationalists, misogynists, conspiracist disinformation specialists, and hate-mongering politicians; they do their damnedest to ensure that these same toxic elements have free range on the open prairie. Yet it bothers them not in the slightest if the speech of one of their opponents gets shut down.

I will admit that there’s a lot about Twitter I miss. I especially miss its didactic qualities resulting from the 280-character limit, because it’s been fascinating to see how readily people can absorb information when it’s delivered in bite-sized chunks (which is why, when you could read my timeline, I had a series of extremely long threads that I used for educating people about history and the principles of fighting authoritarianism). And not being able to connect with my fellow journalists is, frankly, something of a professional debacle.

So I will continue to discuss this with folks at Twitter and report back to you on how it’s going. Hopefully at some point my account will be restored.

In the meantime, did I mention how nice and productive life is when you’re off Twitter?

Published with permission of Daily Kos
