With fewer than 45 days remaining until Election Day, the Foreign Malign Influence Center (FMIC), a U.S. government agency tracking the influence operations of foreign actors, has again warned the American public about the threat that malicious AI-generated content poses to the integrity of our democracy.
The FMIC report concludes that foreign actors from Russia, China, and Iran are actively using AI to improve and accelerate aspects of their disinformation operations, which aim to exploit the ideological rifts within American society and manipulate voters. The most common methods these actors use to legitimize AI-generated disinformation include persuading prominent U.S. figures to repost deepfakes as genuine, publishing deepfake content on fake social media accounts and websites posing as news outlets, and framing AI-generated forgeries as “leaks,” counting on the salacious label to stir audience curiosity.
Preparing for an Escalation in Deepfake Attacks
At Reality Defender, we see these methods of disinformation deployed regularly, whether by malicious actors from rival nations or by those operating closer to home. While we have yet to see a disastrous AI-fueled incident that irreversibly impacts an election season, FMIC’s report acknowledges that engineers of disinformation are constantly refining the ways in which they weaponize deepfakes, and such an incident may simply be a matter of time.
It is never too late for our institutions and digital platforms to prepare for the next generation of deepfake threats. The methods bad actors employ hinge on the ability of deepfakes to spread undetected, overwhelming digital spaces and drowning out warnings about their illegitimacy. With deepfake detection integrated at every level of public content dissemination, AI-generated disinformation planted by bad actors can be identified at the outset. That allows platforms and public institutions to stay ahead of fake stories and helps voters base their decisions on facts, not sophisticated fictions spun by AI.
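To make the idea of detection at the point of dissemination concrete, here is a minimal sketch of how a platform might screen uploads before they are published. The `detect_deepfake` function, the `REVIEW_THRESHOLD` value, and the moderation statuses are hypothetical placeholders for illustration, not Reality Defender’s API or any particular platform’s pipeline.

```python
# Illustrative sketch only: a simplified upload pipeline that screens media
# with a deepfake-detection step before publication. The detection call,
# threshold, and statuses are assumptions, not any specific vendor's API.
from dataclasses import dataclass


@dataclass
class Upload:
    media_id: str
    path: str


def detect_deepfake(path: str) -> float:
    """Stand-in for a call to a detection model or service.

    A real integration would analyze the file at `path` and return a
    manipulation-likelihood score between 0.0 and 1.0; here we return a
    fixed dummy value so the sketch runs end to end.
    """
    return 0.0


REVIEW_THRESHOLD = 0.8  # assumed cutoff; a real deployment would tune this


def moderate(upload: Upload) -> str:
    """Screen an upload at ingestion, before it reaches other users."""
    score = detect_deepfake(upload.path)
    if score >= REVIEW_THRESHOLD:
        # Hold likely-manipulated media for human review and labeling
        # rather than letting it spread undetected.
        return "held_for_review"
    return "published"


if __name__ == "__main__":
    print(moderate(Upload(media_id="example-1", path="example.mp4")))
```

The point of the sketch is the placement of the check: screening happens at ingestion, before content circulates, rather than after a fake has already gone viral.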
As of now, no major AI-fueled incident has greatly disrupted our democracy. To keep it that way, we must continue to embrace the tools at our disposal to offset the nefarious side of AI.