Musk's definition of “free speech” is juvenile & harmful. Now that he's seeing it's unprofitable, will he make changes?
November 15, 2022

Musk is making massive blunders in his takeover of Twitter. It's great fun to point and laugh, which I will happily do. It's an opportunity to show him (and the rest of the world) why his definition of "free speech" is juvenile and harmful. It's also a time to point out that when any social media company fails to act on known harmful content, there should be consequences.

Sen. Ed Markey asked Musk for answers about the account verification problem and disinformation on Twitter. Musk, being a juvenile edgelord, mocked the request. Markey, who sits on the Subcommittee on Communications, Media, and Broadband, tweeted back, "One of your companies is under an FTC Consent decree. Auto Safety watchdog NHTSA is investigating another for killing people. And you're spending your time picking fights online. Fix your companies. Or Congress will."

Colbert quipped, "That's going to leave a markey."

Here are some other impersonation failures from Musk's Twitter Blue rollout.

Showing companies the financial consequences of failing to act on harmful rhetoric is a powerful tool for change. It's one of the methods I have developed and taught to multiple groups and individuals over the last 15 years.


Bloggers Take on Talk Radio Hosts — New York Times January 15, 2007

A San Francisco talk radio station pre-empted three hours of programming on Friday in response to a campaign by bloggers who have recorded extreme comments by several hosts and passed on digital copies to advertisers.

The lead blogger, who uses the name Spocko, said that he and other bloggers had contacted more than 30 advertisers on KSFO-AM to inform them of comments made on the air and to ask them to pull their ads.

The activist groups and individuals I have worked with over the years, such as Color of Change, Free Press, and Angelo Carusone (now the head of Media Matters for America), have used Musk's failure to understand the harm caused by violating Twitter's own safety terms and conditions to convince advertisers to leave the platform.

Credit: Free Press Action

Musk's Paul Pelosi tweet might cost him billions in lost revenue

Musk tried to blame the activists for advertisers leaving, but just like my advertiser alert campaign in 2006, the advertisers looked at the situation and made their own decision to stop advertising.

Musk's response reminded me of how the management of the RW radio station KSFO responded when I alerted advertisers to the violent rhetoric coming from their hosts. First they told the advertisers it wasn't true, but the advertisers heard it for themselves. Then they said the hosts were joking, but I had enough examples to prove they were serious, including audio clips of them saying, "I'm not joking!" Instead of telling the hosts to stop talking about "putting a bullseye" on Nancy Pelosi, the station attacked me and had my website shut down. That was a great narrative flip: after that, they couldn't play the victim.

When advertisers started leaving the station, one caller suggested that a host should "name and shame" the advertisers who left, to punish them. One of the three hosts agreed with the idea!
(The same suggestion was made to Musk; look at his boneheaded response!)



This is stupid behavior based on impulsive emotion. Lashing out at others, instead of examining and fixing their own behavior, is typical narcissist behavior. THEY can never be the problem. "No one can tell ME what to say on my own show!" one RW host said.

Musk is seeing now what his definition of 'free speech' includes and doesn't include. Mocking him isn't included. Nor is impersonating brands.

Musk is learning that terms of service exist for a reason. "Hey, falsely impersonating others is bad! It happened to me! Spreading disinformation is bad; it led to harm to me and my bottom line!"

We know harmful content connected with terrorism, racism, misogyny and online hate is very real. I was recently telling someone I'd just met about my work to defund right-wing media because management wouldn't take action to stop it. I told her that the movie Hotel Rwanda had a major impact on me, and when I heard violent rhetoric coming out of my local radio station, I decided to act. I knew that in America people change their behavior when money is involved. I set about showing the radio stations, and then TV stations, that what they thought was an asset was actually a liability.

In America the impact of financial harm on a company is one of the ways we can drive change. It's good to show everyone that Musk's failure to follow Twitter's own TOS is causing him financial harm. It's also a way that activists can get one powerful group of companies to pressure other companies.

One of the things I taught the activists I trained is how to help corporate allies do what you want. You look for ways that walking away from violent rhetoric, misogyny, or racism is in line with their OWN stated values. Here is one of the real examples I used:

"Hey, United Airlines, I see in your values statement that you don't believe in threatening other people with death. Now listen to this radio host who reads your ads talk about blowing a black man's head off."


But you also need to show them that doing what you suggest is good for their bottom line, because there are people in the company who might think, "Reaching the valuable demographic that WANTS to kill Democrats is the best way to make money." (They won't come out and say that, though. So we remind them that the venue is not the only game in town to advertise on, and we point to advertisers who PAID the price by NOT disassociating their brand from a toxic host when they had the chance.)

In big companies there are people who are in charge of "The Brand" and being a "good corporate citizen": environmental responsibility, diversity, and support for communities like LGBTQ+ folks.

Then there are salespeople who look at those values and say, "Can we make more money if we ignore this stuff? If so, BOOM, ignore it. We need to meet our quarterly earnings, and feel-good diversity BS is getting in the way of revenue."

The diversity people tell the salespeople, "BECAUSE we are doing these things, we are making MORE money!" But that's not always easy to measure, so they need to show the salespeople how associating with horrible things hurts sales.

I learned a lot from working with the folks at Color of Change. They really understood how to show top corporate execs that good corporate values can be connected to positive revenue, and that a failure to do the right thing can lead to a huge financial failure.

But what if threatening violence is profitable? What if, like Murdoch's Tucker Carlson show, threatening violence doesn't earn the network money directly, but it does earn them power?
This is where Sen. Markey's comment is important and needs to be amplified.


The government has a duty to protect people. A huge number of Americans agree on this, and it's also in our Constitution. Now, let's say Musk bought a company that made both benign gases and toxic gases, then removed all the safety regulations on all of them. The toxic gases then hurt people. The community would demand something be done to stop the harm. Right?

In America we use lawsuits and pressure campaigns to stop corporate harm, but we also pass laws. We need to look at what government can and should do to protect people.

I'm NOT threatening Musk with violence

Twitter's TOS restricts threats of violence. I'll bet that if Elon receives threats of violence toward himself, he'll remove them. If they are removed, it needs to be pointed out that threats toward others should be removed too. I would say,

"Hey, Elon, you didn't like it when you got threats of violence. You removed them. Other people should have the the same protections."

And since he's an engineer with a low EQ, he thinks he can get an AI program to do all the work. WRONG! Like his "self-driving car," AI programs aren't good enough yet. He needs good POLICY and good PEOPLE. That is also what the people he fired from the Trust & Safety division did for the platform.



Four of my favorite activist groups fighting to reduce online harm

My friends in the activist community know that "brand safety" matters to companies, which have a financial incentive to protect their brands. They organized and took these actions for a reason: they know that some of the same things that upset brands, like disinformation about medicine, can also hurt people. As an individual it often feels like you have no power to make a change, so I suggest you join these groups that are doing great, effective activism:

  1. Sign up for their email alerts
  2. Donate money to them
  3. Support their efforts to make social media a safer place


Here are four of my favorites and what they have said.
Free Press Action Fund
Jessica J. González, co-CEO of Free Press: “Racists and conspiracy theorists are testing how far they can go with spreading lies, harassment and abuse — and misinformation about the midterm election is rampant. This is not the healthy forum that the vast majority of Twitter users want, and it exposes Twitter’s advertising partners to great risk. We’re calling on Twitter to, at a minimum, retain and actually enforce existing community safeguards and content-moderation systems.”

Media Matters For America
Angelo Carusone, president of Media Matters for America: “Musk has already put Twitter on that glide path, firing employees responsible for content moderation and brand protection and even tweeting out political conspiracy theories himself. Luckily, major brands that advertise on Twitter, and provide over 90 percent of its revenue each year, can speak up and make it clear: Their buys are contingent on the maintenance of the key brand-safety guidelines and community standards — and they will accept nothing less.”

Color of Change: Tell Advertisers: Keep Twitter Safe

Here is a copy of the open letter they sent to the top Twitter advertisers.

My new favorite group is The Center for Countering Digital Hate
“Elon Musk has consistently failed to comprehend that freedom of speech does not mean freedom to abuse and that online spaces should be safe for women, people of color, the LGBTQ+ community and other marginalized groups,” said Imran Ahmed, CEO of the Center for Countering Digital Hate. “Twitter must make a clear commitment to retaining existing standards, and Musk should provide a credible plan for applying his undeniable engineering prowess to reducing the prevalence of bots, increasing the detection and pre-publication removal of violative content, and enforcing the rules that prevent his platform from becoming a ‘hellscape’ with what appears to be significantly reduced staff and resources.”

I suggest people read their STAR Framework, a global standard for regulating social media: Safety by Design, Transparency, Accountability and Responsibility.

We have learned from the testimony of insiders and whistleblowers that the companies know the harm they are causing and can make changes to reduce it, but don't.

Using financial leverage to get big tech companies to stop online harm is one tool we can use. We also need other tools, like legislation that protects people and reduces harm to them and to our democracy. The status quo cannot stand. It has a damaging impact on individuals, communities and our democracies.

Here's a quote from Imran Ahmed about it,

We cannot continue on the current trajectory with bad actors creating a muddy and dangerous information ecosystem and a broken business model from Big Tech that drives offline harm. We need to reset our relationship with technology companies and collectively legislate to address the systems that amplify hate and dangerous misinformation around the globe.

 Imran Ahmed, CEO The Center for Countering Digital Hate

Oh and btw if you are on Mastodon I'm
