Twitter Bots Are a Major Source of Climate Disinformation

Such accounts can distort online conversations and potentially diminish support for climate policies

Twitter CEO Jack Dorsey testifies remotely during a Senate Judiciary Committee hearing titled "Breaking the News: Censorship, Suppression, and the 2020 Election" on Capitol Hill on November 17, 2020, in Washington, D.C.

Twitter accounts run by machines are a major source of climate change disinformation that might drain support from policies to address rising temperatures.

In the weeks surrounding former President Trump’s announcement about withdrawing from the Paris Agreement, accounts suspected of being bots accounted for roughly a quarter of all tweets about climate change, according to new research.

“If we are to effectively address the existential crisis of climate change, bot presence in the online discourse is a reality that scientists, social movements and those concerned about democracy have to better grapple with,” wrote Thomas Marlow, a postdoctoral researcher at New York University’s Abu Dhabi campus, and his co-authors.


Their paper, published last week in the journal Climate Policy, is part of an expanding body of research about the role of bots in online climate discourse.

The new focus on automated accounts is driven partly by the way they can distort the climate conversation online.

“Twitter bots have been this growing force of evil over a half a decade now,” said John Cook, a professor at George Mason University’s Center for Climate Change Communication who was not involved with the study.

Unscrupulous actors “have realized how powerful and influential misinformation can be,” he said. “Twitter bots have been a part of that.”

Marlow’s team measured the influence of bots on Twitter’s climate conversation by analyzing 6.8 million tweets sent by 1.6 million users during May and June 2017. Trump made his decision to ditch the climate accord on June 1 of that year. President Biden reversed the decision this week.

From that dataset, the team ran a random sample of 184,767 users through the Botometer, a tool created by Indiana University’s Observatory on Social Media, which analyzes accounts and determines the likelihood that they are run by machines.
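For readers curious about the mechanics of that step, below is a minimal, hypothetical sketch of how individual accounts can be scored with the botometer-python client library published by the Observatory on Social Media. The credentials, account handles and 0.5 cutoff are illustrative placeholders rather than details from the study, and the exact response fields can vary between Botometer versions.

```python
# Hypothetical sketch: scoring Twitter accounts with the botometer-python client.
# Credentials, handles and the 0.5 threshold below are placeholders, not values
# used by the researchers.
import botometer

rapidapi_key = "YOUR_RAPIDAPI_KEY"  # placeholder credential
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",  # placeholder Twitter API credentials
    "consumer_secret": "YOUR_CONSUMER_SECRET",
    "access_token": "YOUR_ACCESS_TOKEN",
    "access_token_secret": "YOUR_ACCESS_TOKEN_SECRET",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key=rapidapi_key,
    **twitter_app_auth,
)

accounts = ["@example_user1", "@example_user2"]  # hypothetical sample of handles

# check_accounts_in() yields (account, result) pairs; the result includes
# bot-likelihood scores such as the "complete automation probability" (CAP).
for screen_name, result in bom.check_accounts_in(accounts):
    if "error" in result:
        continue  # skip protected, suspended or otherwise unscorable accounts
    score = result["cap"]["universal"]  # probability-like score between 0 and 1
    label = "likely bot" if score > 0.5 else "likely human"
    print(screen_name, round(score, 3), label)
```

In practice a researcher would feed in the full sample of user IDs, store the scores and then choose a threshold for labeling an account a probable bot; the 0.5 cutoff above is only an example of that last step.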

Researchers also categorized the 885,164 tweets those users had sent about climate change during the two-month study period. The most popular categories were tweets about climate research and news.

Marlow and the other researchers determined that nearly 9.5% of the users in their sample were likely bots. But those bots accounted for 25% of the total tweets about climate change on most days.

The bots were also more prevalent in discussions on climate research and news. Other areas of focus for the bots were tweets that included the term “Exxon” and research that cast doubt on climate science. One such tweet highlighted a Nobel laureate in physics who falsely claimed “global warming is pseudoscience.”

“These findings indicate that bots are not just prevalent, but disproportionately so in topics that were supportive of Trump’s announcement or skeptical of climate science and action,” the paper said.

The proportion of bot tweets was smaller on the days immediately surrounding Trump’s decision on the Paris Agreement, the researchers found. They believe that’s because people who don’t often tweet about climate change did so at that time, and the bots were unable to respond quickly to the flood of climate chatter.

The researchers weren’t able to determine who deployed the bots. But they suspect the seemingly fake accounts could have been created by “fossil-fuel companies, petro-states or their surrogates,” all of which have a vested interest in preventing or delaying action on climate change.

Other researchers who study climate conversations on Twitter have found an even greater prevalence of bot-like accounts. A paper published last year in the Proceedings of the International Conference SBP-BRiMS 2020 estimated that 35% of the accounts that tweeted about climate during the 2018 United Nations Climate Change Conference in Poland were bots.

But that paper, from researchers at Carnegie Mellon University, found an equal number of bots supporting and casting doubt on climate science.

Regardless of which side they’re on, bots are an impediment to curbing the flow of climate misinformation, said Cook, the George Mason professor.

“It is important to shut bots down,” he said in an interview. “It’s really just a matter of social media platforms taking responsibility and aggressively taking down what are flagged as definite bots. To me, that’s the bare minimum that Twitter should be doing.”

The reason Twitter and other platforms haven’t taken that step, Cook said, is that there are financial incentives to ignore the problem.

“Generally speaking, misinformation is good business,” he said.

“Misinformation is more likely to be clicked and liked because it tends to be more sticky,” Cook explained. “And the business model of social media platforms are likes and clicks and shares: The more an item gets interaction, the more money a platform makes.”

False claims about the coronavirus and the presidential election were some notable exceptions to that rule. Only public pressure will change the calculations social media companies make around bots and climate misinformation, he added.

Reprinted from E&E News with permission from POLITICO, LLC. Copyright 2021. E&E News provides essential news for energy and environment professionals.