Cyberattack Misinformation Could Be Part of Ukraine Invasion Plan

A falsified video would be an update on the traditional use of propaganda campaigns during warfare


Last week U.S. officials claimed the Russian government was planning to publish a video of a staged “attack” by Ukrainian forces. The officials said their announcement was an attempt to preemptively halt a misinformation campaign that could serve as a pretext for Russian forces to invade. Such propaganda campaigns have been used in wars throughout history—but today’s social media landscape allows misinformation to spread further and have greater impact. In fact, manipulating social feeds with false accounts, bots, targeted ads and other methods can be considered a type of cyberattack. Other cyberattacks include stealing information, holding data for ransom and disrupting the basic functions of targets from hospitals to essential infrastructure.

Misinformation campaigns “are really dangerous to democracy because we were not built for monolithic decision-making at the executive level,” says Justin Pelletier, a professor of practice at the Rochester Institute of Technology’s Global Cybersecurity Institute. “Manipulating public opinion can be a way to delay response—it creates a space of freedom for [more repressive regimes] to maneuver within.” Scientific American spoke with Pelletier about the roles that misinformation, and cyberattacks more broadly, are playing in the conflict between Russia and Ukraine and how these techniques differ from those used in conventional warfare.

[An edited transcript of the interview follows.]


Even before news broke of this video, you had suggested that, out of all the ways Russia might use cyberattacks in this conflict, the most likely threat would be misinformation. Can you tell me about that?

It’s important to recognize that there are more acute, higher-impact cyberattacks that are real risks. We haven’t really seen one of those yet, but it’s theoretically or conceptually possible—the explosion of a power plant or something like that would be really low probability but really high impact.

The most likely threat, and the one that we’ve probably seen most prevalently from Russia, is the deliberate computational approach to misinformation campaigns. In Russia’s case, [people] don’t necessarily need to hack a bunch of voting records and change the outcome of an election. All they need to do to achieve some of their goals, I expect, is to [introduce misinformation to] create doubt in the minds of the American public about the validity of our electorate. And they can then systematically amplify that doubt through computational means—with Twitter bots and all these advanced misinformation tools that are enabled through cyber—to achieve a sociologically disruptive effect in our democracy.

That reminds me of how, during World War II, Germany used leaflet campaigns and the technology of radio to spread propaganda and attempt to disrupt Allied forces. Is there a significant difference between historical misinformation campaigns and what we are seeing today?

I really like the idea of that being a continuation of the old with new technologies. What’s different now, though, is that through cyber means we have a way of observing direct outcomes that allows us to engineer those outcomes in a way that was never before possible. If we look at the science—the psychographic segmentation, the influence stratagems that will lead a person to make a decision and the mapping of that—we’ve now transcended what was previously possible in that feedback loop. Before, qualitatively, we could say, “We’re doing radio broadcasts” or whatever. But we didn’t necessarily have demonstrable proof of an impact such as “And that translates to people liking this polarizing thought leader on Twitter.”

Are there other ways in which cyberattacks are analogous to more traditional aspects of warfare?

We can think about misinformation as a propaganda arm, and I think there are other analogues to conventional attacks. There are differences, though. The loss of life from a cyberattack is extraordinarily rare. I think what we see most commonly with a cyberattack is really that idea of espionage, sedition and subversion. If you sabotage some project and set back a research and development program that’s adversarial to your national interests, that’s a potential use of cyber. Or we see sedition or subversion: encouraging people to resist lawful orders by spreading misinformation. And then there’s still the denial-of-service potential that could lead to a loss of life, such as power outages in the middle of winter or canceled surgeries from a ransomware attack on a hospital.

What is cyber, exactly, and how does it play a role in warfare?

It’s a tool in the kit bag that can synchronize with the other elements of national power in a really interesting way. Cyberactivity can sit anywhere in the continuum of competition, all the way from cooperation—we use cybercampaigns for diplomacy, information, and so on—through to armed conflict and open war. Russia is a really good example because [the nation has] done such a masterful job at understanding the potential to achieve goals across all those elements: diplomatic, information, military and economic. It’s very clear Russian-speaking hackers have impacted energy, health care, finance, voting and infrastructure—they are among the most capable cyberactors on the planet. Of course, it’s difficult to directly tie those attacks to the Russian government. But in some ways, that creates plausible deniability for the Kremlin to leverage cyber as a clandestine or covert element of power projection. We haven’t seen them do anything that would create massive loss of life, but I think there’s a natural deterrent in doing something so overt that it could lead to a declaration of war.

How can people and organizations defend themselves against cyberattacks and misinformation campaigns?

We’ve got a gradient of defense need and potential, from everyday average citizens all the way through multinational corporations and extremely large government agencies and everything in between. For everyday average citizens, just being aware of security patches and running security updates really does help. There’s also a need for skeptical inquiry and critical thinking at the average citizen level. Remembering that our neighbors are just human beings like us with the same kind of goals and that we don’t have to hate them because they’re part of some other group, that’s really useful. It seems pretty basic, but it really is an important thing that will contribute to the viability of our system of government.

Multinational corporations and governments, too, should encourage third-party testing, which brings in a competent outside party to find and report vulnerabilities that may have evaded the [system developers]. And further, we’ve seen the emergence over the past few years of a practice called purple teaming, an iterative mini war game within an organization between ethical hacker “red teamers” and the organization’s defenders, which we call the blue team. They work collaboratively—blending red and blue to make purple—in a way that’s transparent to the defending organization so that it can see what it would look like if an advanced, competent adversary were operating in its network. Those types of tests reveal vulnerabilities in people, processes and technologies that are really difficult to find any other way.

There’s a whole field of study in what we call “normal accident theory” and organizational resilience. And the idea of a normal accident is really applicable to a cyberattack. Whether you’re an individual, a company or a government, if you expect to be breached—just as highway planners expect there to be car accidents—you can build in controls that contain the spread and the devastation that’s going to come from a cyberattack.

Sophie Bushwick is tech editor at Scientific American. She runs the daily technology news coverage for the website, writes about everything from artificial intelligence to jumping robots for both digital and print publication, records YouTube and TikTok videos and hosts the podcast Tech, Quickly. Bushwick also makes frequent appearances on radio shows such as Science Friday and television networks, including CBS, MSNBC and National Geographic. She has more than a decade of experience as a science journalist based in New York City and previously worked at outlets such as Popular Science, Discover and Gizmodo. Follow Bushwick on X (formerly Twitter) @sophiebushwick
