Media Matters Deceived and Manipulated in Six Steps to Harm Musk’s X, Suit Alleges

The suit alleges that Media Matters manipulated the ad pairing algorithm in order to trick the offensive content filtering system.
A photo illustration of the new Twitter logo in London on July 24, 2023. (Dan Kitwood/Getty Images)
Petr Svab

Activist group Media Matters manipulated algorithms of the X social media platform to create a situation where ads for some of X’s largest clients appeared next to pro-Nazi content, according to a lawsuit filed by X on Nov. 20.

The activists executed the plan in six steps, manufacturing an X account specifically designed to trick the algorithms into showing the offensive content and the desired ads together, the suit alleged.

“The end result was a feed precision-designed by Media Matters for a single purpose: to produce side-by-side ad/content placements that it could screenshot in an effort to alienate advertisers,” said the suit, filed in federal court for the Northern District of Texas.
Following a Media Matters report that featured screenshots of the ad placements, several large companies, including Comcast, Apple, and IBM, halted their advertising on X.

Free Speech Clash

After billionaire tech entrepreneur Elon Musk took over Twitter (later renamed X) in 2022, he made it a policy that offensive but legal content, within some bounds, wouldn’t be removed from the platform but rather marginalized, a “freedom of speech, not freedom of reach” policy.

Media Matters disagreed with this policy, arguing that more content should be removed outright. The group, funded prominently by progressive billionaire George Soros among other donors, has waged a campaign to drive advertisers away from X.

“This November alone Media Matters released over twenty articles (and counting) disparaging both X Corp. and Elon Musk—a blatant smear campaign,” the suit says.

Six-Step Plan

Under X’s policy, advertisements shouldn’t appear next to certain offensive content, such as promotion of Nazism. The platform runs algorithms that mark such content as ineligible for ads, as well as algorithms that pair users with advertisements based on their content preferences.

The suit alleges that Media Matters exploited the ad pairing algorithm in order to circumvent the offensive content filtering system.

The plan was executed in six steps, the suit alleges:

1) Media Matters couldn’t use a brand-new account for the plan because X has a policy that ads don’t show up on accounts younger than 30 days. The activists thus used an older account, dodging the first layer of protection against inauthentic activity.

2) “Media Matters set its account to follow only 30 users (far less than the average number of accounts followed by a typical active user, 219), severely limiting the amount and type of content featured on its feed,” the suit says.

That means that the ad pairing algorithm had limited data to gauge what ads the user wanted to see.

3) The activists looked up accounts that post offensive content objectionable to advertisers. They also looked up the X accounts of major advertising brands that they apparently wanted to persuade to stop advertising on X. They then set their inauthentic account to follow these accounts.

“All of these users were either already known for posting controversial content or were accounts for X’s advertisers. That is, 100% of the accounts Media Matters followed were either fringe accounts or were accounts for national large brands,” the suit says.

“In all, this functioned as an attempt to flood the Media Matters account with content only from national brands and fringe figures, tricking the algorithm into thinking Media Matters wanted to view both hateful content and content from large advertisers.”

4) Even in this situation, X claimed, the ads Media Matters sought still didn’t appear next to the objectionable content. The activists therefore scrolled continuously through the feed and repeatedly refreshed the page.

“Media Matters’ excessive scrolling and refreshing generated between 13 and 15 times more advertisements per hour than would be seen by a typical user, essentially seeking to force a situation in which a brand ad post appeared adjacent to fringe content,” the company said.

The situation the activists created was so contrived that for most of the targeted brands (IBM, Comcast, and Oracle), the problematic ad pairing was produced only once, for the account used by Media Matters. For one brand, Apple, the pairing was produced twice, at least one of those instances for the Media Matters account.

“No authentic user of the platform has been confirmed to have seen any of these pairings,” the suit said.

5) Media Matters screenshotted the ad pairings, but did so in a way that cropped out any posts above and below, as well as any information about the account it was using.

“Had readers been able to see the posts above and below the pairings, they would have easily gleaned the highly-specific nature of the small number of accounts Media Matters chose to follow,” the suit says.

“Media Matters chose to show every pairing in its article using this deceptive technique—hiding its deceit through even more deceit.”

6) Media Matters set the account to “private,” “blocking anyone from seeing which accounts Media Matters actually followed, thus disallowing anyone from understanding how its feed was manipulated,” the suit said.

Finally, Media Matters omitted from its report the steps it took to produce the ad pairings and screenshots.

“The overall effect on advertisers and users was to create the false, misleading perception that these types of pairings were common, widespread, and alarming,” the suit said.

Media Matters president and CEO Angelo Carusone called the lawsuit “frivolous” and “meant to bully X’s critics into silence.”

“Media Matters stands behind its reporting and looks forward to winning in court,” he said in a Nov. 20 X post.