Elon Musk’s ‘Thermonuclear’ Lawsuit Appears to Confirm Hate-Adjacent Ads on X

Elon Musk’s X, formerly Twitter, has filed a lawsuit alleging defamation by a media watchdog over claims that ads from major companies appeared next to antisemitic content. But the suit appears to confirm the very thing it claims is defamatory.

Media Matters last Thursday published an article with screenshots showing ads from IBM, Apple, Oracle and others appearing next to hateful content (like, full-on pro-Hitler stuff).

IBM and Apple have since pulled their ads from X, no doubt a serious blow for a company already facing an exodus of advertisers. (It didn’t help that Musk himself appeared to personally endorse some antisemitic views.)

The article provoked Musk’s wrath, and the billionaire over the weekend vowed that “The split second court opens on Monday, X Corp will be filing a thermonuclear lawsuit against Media Matters and all those who colluded in this fraudulent attack on our company.”

The lawsuit was indeed filed, but it appears to be missing the promised warhead. You can read it here; it’s quite short. The company alleges that Media Matters defamed X, having “manufactured” or “contrived” the images; that it had not “found” the ads as claimed, but rather had “created these pairings in secrecy.” (Emphasis theirs.)

Had these images actually been manufactured or created in the way the language here implies, that would indeed be a serious blow to the credibility of Media Matters and its reporting. But X’s lawyers don’t seem to mean that the images were fabricated. In fact, CEO Linda Yaccarino posted today that “only 2 users saw Apple’s ad next to the content,” which seems to directly contradict the idea that the pairings were manufactured.

Media Matters certainly set up the conditions for those ads to appear: it used an older account (old enough to bypass the ad filters applied to new accounts), then followed only hateful accounts and the corporate accounts of advertisers. The number of users who follow only neo-Nazis and major tech brands is, to be sure, limited. But the ads unequivocally appeared in the feed next to that content, as Yaccarino confirmed.

The lawsuit says that these accounts were “known to produce extreme, fringe content,” yet they were not demonetized until after Media Matters pointed them out. So X knew they were extreme, but did not demonetize them — that is what the lawsuit expressly states.

So there does not appear to be anything inherently fraudulent or manufactured about claiming those ads appeared next to that content. Because they did. It just hadn’t happened to an “authentic user” yet, but the conditions for that to happen were not really that outlandish. Angelo Carusone, who heads up Media Matters, also pointed out on X shortly after Yaccarino’s confirmation that ads were placed on a search for “killjews.”

Moderation of hateful content is incredibly hard, of course, and most social networks have found that it is a constant battle against mutations of hateful hashtags, user names, and slang. But Yaccarino earlier claimed that brands were “protected from the risk of being next to” hateful content. Incompletely, it seems.

The edge case shown by Media Matters may not be representative of the average user’s experience, but it does show something that is perfectly possible on X, and advertisers seem to have, quite rationally, declined to take that risk. X’s lawyers claim that even advertisers who weren’t mentioned in the article pulled back because of Media Matters’ report.

That’s probably not true. For instance, Lionsgate specifically said that “Elon’s tweet” was the reason for their decision to leave.

The lawsuit, filed in the U.S. District Court for the Northern District of Texas, demands $100,000 in damages and a jury trial, though neither outcome seems likely.