Elizabeth Warren deserves a warm round of applause for being the first leading Democratic presidential candidate to recognize that, as we’ve been saying for a while, the rise of white nationalism is a major national issue worthy of full discussion in the 2020 presidential campaign.
On Tuesday, Warren released the latest of her plans under the “I’ve got a plan for that” theme: “Fighting Back Against White Nationalist Violence.” It’s detailed, thoughtful, and largely achievable.
It’s also, unfortunately, incomplete.
Warren’s plan covers most of the essential bases:
- Prioritizing the investigation and prosecution of white nationalist violence.
- Keeping guns away from those at risk of violence.
- Early intervention to prevent extremist violence.
- Police reform to protect targeted communities.
- Building and rebuilding trust in targeted communities.
This is a terrific start: each outlined point is an essential component of any complete plan. Warren’s details on each point—particularly the first—are also thorough, well thought out, and achievable.
What’s missing, however, is any discussion of or attempt to confront the main driver of this phenomenon, the 800-pound gorilla in the room: online radicalization.
It’s worth noting that one of Warren’s rivals, Sen. Kamala Harris, already came out with her own proposal for “Combating Violent Hate.” And that plan does include proposals to bolster programs intended to counter the spread of violent extremism before it begins, and mentions putting “pressure on online platforms to take down content that violates their terms and conditions.” However, Harris’ proposal is short on specifics, and still does little to address the tide of radicalization sweeping across the web.
We already know the horrific dimensions of the problem. Literally millions of young people, particularly angry young men, are being exposed to alt-right recruitment through a multitude of platforms on the internet—ranging from vile message board sites like 4chan to seemingly benign video game chat rooms—and thousands are being successfully recruited into those ideologies, a process the recruiters call “red-pilling.”
These ideologies are built around a farrago of phony “race science” and conspiracy theories, a combination that, as the FBI has found, has a powerfully unhinging effect that in fact helps fuel acts of violence. As a result, a steady stream of red-pilled white nationalists have been undertaking acts of domestic terrorism, many of them inspired by the attacks that preceded them. These include such horrors as the mass killings in Pittsburgh and El Paso, as well as an attempted bombing spree by a fanatical Donald Trump supporter directed at news media and top Democratic politicians.
This phenomenon has spread from American shores to inspire terrorism around the globe, in places such as Christchurch, New Zealand; Oslo, Norway; and Halle, Germany. A senior counterterrorism official recently remarked that the United States is now seen globally as an exporter of white nationalism and its accompanying acts of violence.
The problem is a matter not just of terrorism, but of the broader toxic effects of white nationalist ideology. As we noted a while back:
For the past three years, hate crimes have soared to record levels—and even so, it is certain that the numbers reported are severely undercounted. The Anti-Defamation League also recorded a dramatic increase in propaganda incidents involving white nationalists, particularly as they have focused much of their efforts on recruiting on college campuses.

The tide is cresting. The Southern Poverty Law Center recently reported that it had recorded an all-time high in the number of hate groups operating in America, 1,020, much of it attributable to the apparent toxic influence of Donald Trump, to whom white nationalists have been pledging allegiance since well before the election.
There are a number of ways to tackle this radicalization, ranging from government funding of violent-extremist intervention programs to police reform—particularly with regard to the police’s egregious mishandling of violent far-right street demonstrations—but the centerpiece will have to be a focus on reform within the internet as an industry. The inability of internet platforms—from YouTube to Twitter to Facebook to Instagram, and beyond—to effectively remove extremist and conspiracist content tells us that the public and government leaders will have to play major roles in forcing reform within the industry.
This inability has been on spectacular display recently, as evidence has emerged that key players at Facebook have insisted on preserving “reliable source” status for websites that in fact have been firehoses of disinformation, much of it white nationalist in nature, such as Breitbart and The Daily Caller—the latter of which Facebook actually employs as a fact-checking partner. The white nationalist sourcing of those publications has recently been starkly exposed by the SPLC’s revelations about White House aide Stephen Miller’s shoveling of information from the racist far right to the newsrooms of those publications.
Peter Neumann’s International Centre for the Study of Radicalisation and Political Violence at King’s College London studied the problem of American online radicalization in depth as early as 2013 and published a comprehensive overview of how authorities can tackle it. The first step: reducing the supply. But that comes with a major caveat.
“First comes the recognition that—for constitutional, political, and practical reasons—it is impossible to remove all violent extremist material from the Internet and that most efforts aimed at reducing the supply of violent extremist content on the Internet are costly and counterproductive,” the study explains at the outset.
Next: reducing the demand. These measures would work, “for example, by discrediting, countering, and confronting extremist narratives or by educating young people to question the messages they see online.”
Finally: exploiting the internet. Making practical use of online content and interactions for the purpose of gathering information, gaining intelligence, and pursuing investigations is essential for preventing violence and terrorism.
Neumann and his team explain that the options for reducing the supply of such content are limited, particularly in the United States, where constitutional free-speech protections mean censorship is extremely circumscribed. In European nations, nationwide filters and legal restrictions may affect the spread of extremist material where laws exist prohibiting it, but that won’t affect American consumers to any appreciable extent. Indeed, “most of the traditional means for reducing the supply of violent extremist content would be entirely ineffective or of very limited use in the U.S. context,” the study explains.
The most viable option in the United States would involve commercial takedowns of extremist material from the platforms where it festers, such as YouTube, Facebook, and Twitter. Those companies have, since early 2019, begun making serious efforts at removing extremist content from their platforms, though with mixed results.
“One practical option could be for government agencies to create and, where appropriate, strengthen informal partnerships with Internet companies whose platforms have been used by violent extremists,” the study suggests. “The objective would be to assist their takedown teams—through training, monthly updates, and briefings—in understanding national security threats as well as trends and patterns in terrorist propaganda and communication. As a result, online platforms such as Facebook and Google would become more conscious of emerging threats, key individuals, and organizations, and could align their takedown efforts with national security priorities.”
Reducing demand for extremist and terrorist material online is a much more complicated proposition. Neumann’s study focuses on “activating the marketplace of ideas”—that is, conducting outreach in the very spaces where the radicalism is growing. It notes that chief among the drawbacks to engaging people online is the decided “enthusiasm gap”: “Instead of having extremist views drowned out by opposing views, the Internet has amplified extremists’ voices.” The strategy is also hampered by significant gaps in the pluralism of ideas, as well as major gaps in skill level.
More promising, it suggests, would be measures aimed at “creating awareness” in an effective way and at “building capacity in order to assure that alternative voices are heard.” Countermessaging—exposing people to messages specifically designed to counter the appeal of extremism—can also work, but it too has limitations when it comes from officials or authorities. For such campaigns to really work, they have to be fueled and propagated at the grassroots level, by ordinary people.
The study also discusses ways that the internet can be an effective tool for gathering intelligence on political extremists, because so many of them organize online. It can also be used by investigators in collecting evidence of crimes afterwards.
However, the centerpiece of the study is its finding that promoting digital media literacy is the “most long-term—yet potentially most important—means of reducing the demand for online extremism.” “In recent years, educators and policymakers have recognized the unique risks and challenges posed by the Internet,” the study notes. “Most efforts have focused on protecting children from predators and pedophiles, with the result that—in practically every school—kids are now being taught to avoid giving out personal details and to be suspicious of people in chat rooms. Little, however, has been done to educate young people about violent extremist and terrorist propaganda.”
Neumann says that families and friends still hold the keys to preventing radicalization, because they see the effects in real life while others see only the online persona. Often, red-pilled young men hide those activities, but their alienation and mounting anger become manifest in their daily work and family lives. “We know that in terrorism cases, for example, when people radicalize, a lot of extremists who may be very deep in their extremist worlds still have so-called bystanders—that is, they have friends, family, people around them, colleagues at school, but most importantly family who still have an influence on them.”
The problem, of course, extends beyond even the violence and terrorism that occur as the final outcome of white nationalist ideology, and includes all of the vicious, ugly, and intimidating behavior that occurs in between. This includes the vile threatening behavior that occurs online, as Mia Brett explored in The Forward, particularly the kind directed at women and Jews.
We need our institutions to start taking this online harassment seriously, if not for the women being hurt, then for society at large. These men should not be dismissed as harmless trolls sitting in their mothers’ basements, taking their anger out on women because they’re too scared to talk to any in real life. Many of these men are otherwise normal, functioning members of society who also might be hurting the women in their offline lives, or might be planning to use their guns for a political statement.

Many of these accounts go unpunished on Twitter: if the site used its algorithms to catch white nationalists, it would also likely have to suspend Republican Twitter accounts.
Warren’s proposal, unfortunately, is missing any recognition of these dimensions of the problem. And any plan to tackle white nationalism will remain incomplete, toothless, and ineffective until it contains such recognition.
Published with permission of Daily Kos