Industry Insights

Mar 24, 2024

Deepfakes, Disinformation, and War


Last week saw a deadly attack at a concert in Moscow's Crocus City Hall, which claimed the lives of at least 133 people. Though Russia is currently at war with Ukraine and perpetually at odds with the United States, U.S. intelligence officials nonetheless warned their Russian counterparts weeks earlier of a potential attack planned by members of ISIL, which has since claimed responsibility.

In the aftermath of the attack, President Vladimir Putin and Russian newspapers pinned the blame on Ukrainians. While the accusations were par for the course, the ensuing developments were anything but predictable: on Russia's NTV news channel, a video appeared to show Ukraine's top security official, Oleksiy Danilov, making light of the attack and suggesting that Ukraine was responsible.

This video was quickly proven to be a deepfake, with the audio mashed together from recent interviews with two Ukrainian officials. Yet in a country with loose interpretations of "truth" and a penchant for regularly spreading propaganda, a development that further inflames the ongoing conflict in the region is troubling, to put it mildly.

Deepfakes as a Means of Furthering Conflict

As the Moscow attack and subsequent disinformation campaign demonstrate, deepfakes are increasingly being weaponized to inflame hostilities and sow confusion in already volatile situations. In a conflict scenario, they can be used to falsely attribute inflammatory statements to officials, as was done with the fake Oleksiy Danilov video. Even if the deception is quickly uncovered, the damage may already be done in shaping public perceptions and narratives.

Deepfakes increasingly provide an easy way to manufacture "evidence" to support whatever narrative a regime wants to push, whether pinning blame on an adversary or rallying the population against a supposed enemy. The possibilities are chilling to consider, especially as deepfake videos grow ever more believable. Imagine a deepfake video of a world leader declaring war, or footage of a "false flag" attack. In the fog of war, such disinformation could have devastating real-world impacts before the truth emerges. Even skeptical viewers may be deceived.

What We Can Do

Combatting the malicious use of deepfakes is a complex challenge. Technological solutions like Reality Defender's AI-based deepfake detection can help governments quickly disprove such outrageous claims before they spread widely or do any tangible damage. Even the best-equipped agencies are only beginning to implement these tools, at a time when deepfakes are increasingly wreaking havoc across many facets of society.

Yet even widespread adoption of deepfake detection is not the be-all and end-all solution. Efforts to improve digital literacy and critical thinking skills among the public are absolutely vital — even in propaganda-heavy countries like Russia. Tightening legal frameworks and penalties around deepfakes in international courts and governing bodies is equally essential, as is pressuring social media platforms to responsibly handle synthetic media (and enacting penalties if they do not).

The deepfake that circulated after the Moscow attack should serve as a warning of how such fabrications can be abused to stoke the flames of war. We must proactively confront this challenge before fabricated videos and audio become a routine part of the disinformation playbook in conflicts worldwide. The stakes for truth, stability, and human lives are too high to ignore.
