Blame the Bots & the Humans

How much influence do bots really have on disinformation?

Matt Spengler
4 min read · Jun 30, 2021
Photo by Tima Miroshnichenko on Pexels

It is cinematic to imagine someone sitting in front of a bank of monitors somewhere in Eastern Europe, commanding their virtual soldiers and promising that the Covid vaccine makes you magnetic. The image of an army of automated bots infiltrating your timeline with fabricated information has overtaken the narrative about how mis- and disinformation spreads. But are those bots really the biggest source of disinformation? The question has been asked frequently over the past five years, and researchers simply cannot agree on the answer.

For instance, an article published by NPR in the early stages of the pandemic stated, “Nearly half of the Twitter accounts spreading messages on the social media platform about the coronavirus pandemic are likely bots, researchers at Carnegie Mellon University said on Wednesday.” Yet Darius Kazemi, an independent researcher who studies how bots work, said, “That seems like a lot.” The discrepancy is understandable: bots are very hard to spot with the human eye, and many algorithms cannot reliably distinguish a bot from a human. One online tool that maps how an article, headline, or keyword spreads on Twitter will mistake a human for a bot because they tweet frequently, have a relatively new account, or have a string of numbers in their Twitter handle. The reverse is also true. Bots are specifically designed to trick algorithms and humans alike into thinking they are people interacting as a normal person would.
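To make that misclassification concrete, here is a minimal sketch of the kind of rule-of-thumb scoring such a tool might apply. The feature names, thresholds, and example account are hypothetical illustrations, not the logic of any real detection tool.

```python
from datetime import datetime, timezone

# Fixed reference date so the example is deterministic (hypothetical)
NOW = datetime(2021, 6, 30, tzinfo=timezone.utc)

def naive_bot_score(account):
    """Toy heuristic bot score from 0 to 3. Thresholds are invented
    for illustration, not taken from any specific tool."""
    score = 0
    age_days = max((NOW - account["created_at"]).days, 1)

    # Heuristic 1: a very high tweet rate looks automated
    if account["tweet_count"] / age_days > 50:
        score += 1
    # Heuristic 2: very new accounts are treated as suspicious
    if age_days < 90:
        score += 1
    # Heuristic 3: a handle containing a long run of digits
    if sum(c.isdigit() for c in account["handle"]) >= 4:
        score += 1
    return score

# A prolific, recently joined human with an auto-generated handle
# trips every rule and gets flagged as a likely bot.
human = {
    "handle": "jane_doe_84921",
    "created_at": datetime(2021, 5, 1, tzinfo=timezone.utc),
    "tweet_count": 4_000,
}
print(naive_bot_score(human))  # 3 -> flagged, despite being human
```

A real bot, tuned to tweet at a modest rate from an aged account with a plausible handle, would sail under every one of these checks, which is exactly the problem.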

This makes it incredibly hard to measure how much influence bots have on social media.

The reality is that bots and humans work in tandem to get their point across, whatever that may be. Bots can and do manipulate headlines or tweets, but more often they are used as amplification devices. A 2018 Pew Research report found that, “Among popular news and current event websites, 66% of tweeted links are made by suspected bots.” It is important to note that this is different from foreign “trolls,” such as those at Russia’s Internet Research Agency (IRA), who are professionals who understand culture and how to manipulate certain emotions. It is also different from astroturfing campaigns, which fabricate outrage over topics such as Critical Race Theory.

Automated bots, which can of course originate in Russia just as easily as anywhere in the United States, are usually less nuanced and meticulous. Their agenda is broader: mostly to amplify messages already created by political figures. In other words, the humans do the messaging; the bots just spread it.

Using a tool such as SparkToro to audit Twitter accounts, you can see that political “influencers” on the Right and the Left probably don’t have as much influence among humans as you think. Many of these figures have follower counts in the millions, but audits suggest that most have a “fake follower” rate between 35% and 50%. That means if someone has two million followers, roughly 800,000 might be labeled “fake,” defined as “accounts that are unreachable and will not see the account’s tweets (either because they’re spam, bots, propaganda, etc. or because they’re no longer active on Twitter.)” Though the definition says the fake accounts won’t “see” the tweets, that does not mean the tweets won’t be blindly spread by bots. This tool, like the one mentioned above, likely mistakes some humans for bots and some bots for humans, but it illustrates how much inauthentic activity shapes information on social media.
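As a quick back-of-the-envelope check on those numbers, here is the arithmetic spelled out. The follower count and fake-follower rate below are the illustrative figures from this article, not an audit of any real account.

```python
def reachable_followers(total_followers: int, fake_rate: float) -> int:
    """Estimate how many followers might actually see a tweet,
    given a fake-follower rate reported by an audit tool."""
    return round(total_followers * (1 - fake_rate))

total = 2_000_000   # illustrative follower count from the article
fake_rate = 0.40    # 40% "fake," within the 35-50% range cited

print(f"fake followers: {round(total * fake_rate):,}")                    # 800,000
print(f"reachable followers: {reachable_followers(total, fake_rate):,}")  # 1,200,000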

While this never-ending stream of bots is concerning and has a major impact on how concocted political narratives are widely shared, we tend to place most of the blame on the bots and overlook the human side of disinformation. Actual humans, and troll accounts run by humans, control what information is spread and how it is presented. In their article, “That Uplifting Tweet You Just Shared? A Russian Troll Sent It,” Darren Linvill and Patrick Warren show how professional trolls linked to the Russian IRA became wildly popular because they were skilled at provoking human emotions on the Right and the Left. Twitter accounts made to look like humans but likely run by Russian trolls started out tweeting “heartwarming, inspiring messages” to gain a large following, and once that was achieved, the account would begin weaving in political narratives.

The most important statement from Linvill and Warren’s article is this:

“Professional disinformation isn’t spread by the account you disagree with — quite the opposite. Effective disinformation is embedded in an account you agree with.”

This is why political talking heads can be so influential, and why “debates” over Critical Race Theory and conspiracies about Covid-19 are so widely believed. Very few people on the Left will follow far-Right accounts, simply because they won’t believe any information those accounts give. The reverse is also true: people who follow far-Right accounts won’t believe any information given by organizations such as The New York Times or NPR.

Bots are very influential in the spread of disinformation, but because we can’t yet accurately measure how influential they are, we sometimes over-romanticize their impact. If we eradicated every single bot from every social media platform, how different would your feed look? Would it stop you from seeing political figures peddling disinformation? Probably not.
