Industry Insights

Apr 25, 2024

How Russia and China Leverage Deepfakes to Sway Voters


2024 is a crucial election year for democracies around the world. As hundreds of millions of voters head to the polls, election disinformation experts are tracking a steep escalation in AI-fueled disinformation from state-affiliated actors in Russia and China.

In their annual assessment of global security concerns for 2024, U.S. intelligence officials warned that both nations were developing new methods of leveraging generative AI and deepfakes to influence election outcomes.

According to a report from Microsoft analysts, disinformation groups affiliated with Russia have increased their activities in recent months, planting online content aimed at eroding U.S. support for Ukraine and boosting rumors of election fraud. The report notes that Russia’s disinformation groups are increasingly centralized and connected directly to institutions funded by the Russian government, rather than operating as splinter cells of Russian intelligence services, as in past campaigns.

Microsoft’s report looks closely at the ways these state actors are employing generative AI. Analysts found that one of these groups first uploaded a deepfake video to a public platform, then drew attention to it through a network of Russia-owned fake news websites around the world. Links to these articles were then shared and amplified by Russian officials, expats, and others on social media, reaching users who would never normally visit those sites and lending the deepfake an air of legitimacy.

The Microsoft report found that campaigns mixing authentic media with AI enhancement are far more successful than those relying solely on synthetic content. These tactics underscore the need for hosting sites and social media platforms to build robust, real-time deepfake detection into their moderation workflows, so that videos that splice manipulated elements into genuine footage to fool the human senses can be properly labeled before reaching voters in targeted countries.
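To make that integration point concrete, here is a minimal sketch of what such a screening step might look like inside a moderation pipeline. The endpoint URL, credential, field names, response shape, and thresholds are illustrative placeholders for a generic detection service, not Reality Defender’s actual API.

```python
# Hypothetical sketch: screening an uploaded video with a deepfake detection
# service before it is published. All names below are placeholders.
import requests

DETECTION_ENDPOINT = "https://detection.example.com/v1/analyze"  # placeholder URL
API_KEY = "YOUR_API_KEY"                                         # placeholder credential
MANIPULATION_THRESHOLD = 0.8                                     # example cutoff for flagging


def screen_video(path: str) -> dict:
    """Send a video file to the (hypothetical) detection service and return its verdict."""
    with open(path, "rb") as media:
        response = requests.post(
            DETECTION_ENDPOINT,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"media": media},
            timeout=60,
        )
    response.raise_for_status()
    # Assumed response shape: {"manipulation_score": float between 0 and 1}
    return response.json()


def moderate_upload(path: str) -> str:
    """Decide how to handle an upload based on the detection score."""
    score = screen_video(path).get("manipulation_score", 0.0)
    if score >= MANIPULATION_THRESHOLD:
        return "hold_for_review"      # likely manipulated: queue for human moderators
    if score >= 0.5:
        return "publish_with_label"   # mixed signals: publish with an AI-content label
    return "publish"                  # no manipulation detected


if __name__ == "__main__":
    print(moderate_upload("campaign_clip.mp4"))
```

The key design choice in a workflow like this is that screening happens before publication, so that content mixing real footage with manipulated elements can be labeled or held rather than spreading unchecked.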

AI-Assisted Disinformation from Beijing

Following this playbook, state-affiliated actors in China have been found to regularly employ AI to enhance videos, audio, and memes that deepen existing sociopolitical divides and plant conspiracy theories about recent tragedies, such as the Kentucky train derailment in 2023 or the wildfires in Maui. In one disconcerting example, disinformation engineers produced deepfake photos of the Maui disaster to support a hoax claiming the fires were caused by the U.S. military testing a “weather weapon.”

Microsoft’s analysis also found that audiences are particularly susceptible to audio deepfakes. At Reality Defender, we saw this play out firsthand when we partnered with Taiwanese authorities during the country’s last presidential election. In the lead-up to the vote, an audio deepfake tarnishing a leading candidate’s reputation was circulated in an attempt to sway the outcome. Using Reality Defender’s state-of-the-art audio deepfake detection models, Taiwan’s investigative authorities were able to determine that the audio was AI-generated and warn the public about its deceptive nature before it could sway voters.

While we hold no illusions that governments will refrain from deploying deepfakes to further their geopolitical interests, we hope that countries will soon form agreements on the responsible and humane use of AI in statecraft. In the meantime, we will continue to partner with governments and platforms to provide a crucial defense against deepfakes. Reality Defender’s detection suite was designed with evolving scenarios in mind, while remaining platform-agnostic and easy to integrate into any workflow across use cases. Because we’ve seen how tirelessly disinformation engineers work to overcome deepfake protection measures, our experts constantly research new AI models and integrate the ability to detect them into our platform, creating a detection system that can be relied upon to shield democratic elections and voters from growing AI-fueled deception.
